Gavia Libraria

A veritable sting

Now that we all know that Bohannon’s Science “sting” was embarrassing pseudo-science, it seems well worth considering how to do better and fairer ones.

The usual means that academic stakeholders, from tenure-and-promotion committees to collection-development librarians, use to judge journal quality are rapidly proving untrustworthy and gaming-prone. (Several never should have been relied upon to the extent they are, but leave that aside for now.) Peer review has known biases and other problems and passes too many bad works, not to mention that it is easy for a journal to claim peer review without actually doing it, or while faking it. Low acceptance rates are a dumb indicator, full stop, especially with the advent of the reputable mega-journal. The less said about opaque, gameable journal impact factor, the better, though it is worth recalling that judging journals (rather than authors who publish in them) is its actual function. Whatever one’s opinion of JIF, however, the continued existence of many, many journals with no or essentially zero JIF is undeniable, meaning that JIF cannot serve to weed out bad journals.

Negative indicators don’t work terribly well at present either. Library cancellations don’t help much because of Big Deal bundling. Retractions are endemic, often opaque, and unfortunately not performed by many journals (along the entire quality continuum) even when urgently needed. (The Retraction Watch weblog is one of the Loon’s favorites, and she recommends it unhesitatingly to everyone in academe, academic librarianship, and academic publishing.)

What’s left? Not a great deal. Might we not create something, then?

Make no mistake, doing systematic sting work would be an enormous undertaking. Bohannon, despite selling out his training and reputation for a cheap hit at OA, did put his finger on one genuine methodological problem: too many journals sit on articles far too long, making a real sting operation tedious, long-term work. Length of consideration process, however, is a testable quality indicator—even better, one many authors care deeply about. Creating a reporting mechanism where authors can rate and answer relatively simple questions about their experiences with various journals seems worthwhile. Anti-gaming and anti-senseless-grudge guards would need to be in place; partnering with an academic-identity enterprise such as ORCID or an article-identity enterprise such as CrossRef might be wise, to start.
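To make that concrete, here is a minimal sketch (in Python) of what a single report in such a mechanism might look like, assuming a verified ORCID iD and, where available, a CrossRef DOI as the anti-gaming anchors. Every name in it, SubmissionReport, median_turnaround, the field list, is an illustrative invention, not any existing service's API.

```python
from dataclasses import dataclass
from datetime import date
from statistics import median
from typing import Optional

# Hypothetical record of one author's experience with one journal. Tying each
# report to a verified ORCID iD and, once published, a CrossRef DOI is the
# anti-gaming guard: at most one report per (author, article) pair.
@dataclass
class SubmissionReport:
    orcid_id: str                   # verified ORCID iD of the reporting author
    journal_issn: str               # the journal being rated
    doi: Optional[str]              # CrossRef DOI, if the article was published
    submitted: date                 # date the manuscript was submitted
    first_decision: Optional[date]  # date of the first editorial decision, if any
    saw_real_review: Optional[bool] # did the author receive substantive reviews?
    outcome: str                    # "accepted", "rejected", "withdrawn", "pending"

    def days_under_consideration(self) -> Optional[int]:
        """Length of the consideration process, the indicator named above."""
        if self.first_decision is None:
            return None
        return (self.first_decision - self.submitted).days


def median_turnaround(reports: list[SubmissionReport]) -> Optional[float]:
    """Collapse one journal's reports into a single, hard-to-game number."""
    days = [d for r in reports if (d := r.days_under_consideration()) is not None]
    return float(median(days)) if days else None
```

Keying reports to one (ORCID iD, DOI) pair per article is the simplest guard against sockpuppets and senseless grudges, and aggregating to a median rather than a mean keeps a single outraged outlier from dragging a journal's numbers around.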

Doing stings with fake papers adds the burden of writing or otherwise securing half-believable fake papers across a broad spectrum of disciplines. For methodological consistency, too, the believability level would have to be roughly even across all the fakes, and the Loon isn’t even sure how to measure the believability of a fake paper! Perhaps the Nigerian-spam tactic is easiest on the whole: make the fakes transparently bad, so that only the most careless journals take the bait. It won’t catch all or even most bad journals, but the journals it does catch will be known to be truly execrable.

The Loon might instead suggest a plagiarism sting, however. Cooperating authors, publishers, or pre/post-print repositories—copyright or copyright license permitting—could quietly donate published papers (or even better, their pre-review manuscripts) for resubmission to the journals being tested. (All authors of all papers employed in plagiarism stings should receive a receipt from the testers, to protect them when journals do the right thing and report the “attempted plagiarism” to institutions or grant agencies.) Two or perhaps three levels of testing then become possible: testing for straight republication, republication of a faithful translation, and (most labor-intensively) republication of what rhetoric scholars and information-literacy librarians call a “patchwritten” article.
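Purely as an illustration of the record-keeping such a sting would need, a brief sketch follows, under the same caveat as before: the type names and fields are invented for the example, not a description of any real system.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum, auto

# Hypothetical encoding of the two or three sting levels described above.
class StingLevel(Enum):
    VERBATIM = auto()      # straight republication of the donated paper
    TRANSLATION = auto()   # republication of a faithful translation
    PATCHWRITTEN = auto()  # a lightly reworded, "patchwritten" version

# Receipt issued to the donating author before any resubmission, so that a
# journal's (entirely correct) plagiarism report back to the author's
# institution or grant agency can be answered with proof of participation.
@dataclass
class DonationReceipt:
    author_orcid: str      # who donated the paper
    source_doi: str        # the already-published paper being reused
    target_issn: str       # the journal being tested
    level: StingLevel      # which variant will be resubmitted
    issued_on: date
    tester: str            # the organization running the sting
```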

Journals that fail the initial test could perhaps be marked for further testing, including fake-paper testing. Reporting raises questions of its own. Should journals be quietly warned that they have failed once and are under scrutiny? Or is it preferable to go public immediately and invite the editors to respond?

Who might fund or do this work? Publishers, scholarly societies, and publishing-trade organizations are too compromised to be trustworthy process managers, sadly, though some of them might be shamed into providing funding for a trustworthy third-party tester. The DOAJ might do this on the open-access side, but that leaves no counterpart for toll-access journals (which, says the Loon’s activist heart, might be no bad thing for OA). Electronic journal aggregators might take it on as an internal quality measure, though they may prefer not to, the better to keep dazzling credulous eyes with the size of their Big Deals. Perhaps the folks behind DORA could crowdsource something credibly? Perhaps ARL, if the work can be firewalled away from SPARC? Perhaps a major research-library consortium? Perhaps a startup?

The Loon quite despairs of teaching academics (much less politicians—and yes, this also is a concern!) to gauge journal quality properly. The inevitable corollary is that academe needs more and better quick-use quality criteria. Let us all consider how best to accomplish this.

7 thoughts on “A veritable sting”

    1. Library Loon (post author)

      Until a preponderance of journals do open review, it is not a reliable discriminator. Even should a preponderance of journals do it someday, it is not a quick discriminator, and, as far as the casual analyst (a politician or bureaucrat, say) is concerned, it can be gamed in much the same ways many online review systems (Amazon, Yelp, et cetera) now are.

      1. Mike Taylor

        At the risk of paralleling the discussion we’re also having on Twitter …

        I think one of the great things about open peer review as a solution is that it works on a journal-by-journal basis. PeerJ doesn’t need to wait for Nature to implement it before it can use it to validate the strength of its own review process.

        In short, any journal that wants to show that its review is solid can show that by making the reviews open. At some point, we’re going to start thinking that journals which don’t do this have something to hide.

        Yelp reviews can be gamed because the reputation of the reviewer is not at stake. An academic who puts her name to false peer reviews will very quickly find that the game is not worth playing. With the emergence of ORCID, the reputations of individuals are becoming easier to track even outside the disciplines where their work is best known.

        While we wait for ORCID to become more pervasive, we have intra-platform reputational links. To go back to my favourite example of PeerJ, the reviews page for my article goes some way to showing the respect the handling editor deserves by linking to a PeerJ profile page that shows he’s authored one PeerJ paper and edited three, and which links to his Academia.edu and Google Scholar profiles. It’s not hard for someone wanting to judge the quality of our reviews to follow a couple of those links and see what a big hitter John Hutchinson is.

        PeerJ’s system is far from perfect, of course: of our actual reviewers, one chose to remain anonymous (I can’t imagine why) and the other has set his profile to private, so you can’t tell anything about him. But I think at the very least, this system shows the way.

  1. Jeroen Bosman

    I second the open peer review option. But I fear that for many years to come there will be no single ideal quick quality measure. It will remain necessary to combine information from various sources and then make up your mind. Being a bit of a number fetishist I also see a role for something like the SJR (SCImago journal rank) as *one of the* sources to include when gauging journal standing.

  2. The Digital Drake

    “Creating a reporting mechanism where authors can rate and answer relatively simple questions about their experiences with various journals seems worthwhile. Anti-gaming and anti-senseless-grudge guards would need to be in place; partnering with an academic-identity enterprise such as ORCID or an article-identity enterprise such as CrossRef might be wise, to start.”

    In practice, I think what’s most likely to work here is a strong editor (or editors). You need an editor who maintains a forum where they, and contributors, can report on their experiences with scholarly journals, to inform people who want to know about good and bad forums to publish, review, edit, and read work in. “Good” and “bad” can be measured on a number of axes: how credible their review process is, how long their turnaround time is, how accommodating they are to open access and other reader concerns, and how reasonable or predatory their pricing or contracts are. All of these concerns are of potential interest to the scholarly community.

    You need an editor who has the time and the obsession to keep up the forum; an editor who has a good sense of forum moderation (and can decide, for instance, when to require attribution of a post to a known academic, and when anonymous posts should be allowed; and be able to detect and deal with trolls, defamers, shills, and sock-puppets). You need an editor well-enough connected into the world of scholarship to attract a critical mass of reviews and reports from scholars. And an editor whose sponsor, if any, will back them up when they get heat. (Because, as we’ve seen over the last couple of years, anyone who poses a serious and credible threat to questionable publishers’ business is likely to get lawsuit threats, and possibly actual lawsuits.)

    The better-known publisher-watch projects that I know of seem to be based on the strong-editor model, including Writer Beware (two dedicated editors, backed by SFWA), and, yes, Scholarly Open Access (one dedicated editor, supported by his library). While I have problems with the judgment of the latter’s editor in some important respects, his tenacity at investigating and reporting on questionable publishers, day in and day out, has had a lot to do with the visibility and attention his work has gotten. If others want to advance a different view of abusive publishers (open access or not), they’re probably going to need to be willing and able to put in at least as much effort.

    (I am not volunteering here. I already have my own editorial obsessions that take quite enough of my time, thank you very much; and there are other professionals much more qualified than I to collect and assess relevant information on journals and publishers.)

    While this is a big task, I don’t think it’s beyond the grasp of an editor, or a small group of editors, cultivating a larger community of contributors. When it comes down to it, the scope of scholarly publishing is quite a bit smaller than the scope of Amazon or Yelp. A few thousand reports a year could cover the ground pretty well, and there are a number of forums online now, moderated by one person or a small team, that handle that scale well.

    It’s by no means certain that the right people, with the right backing, will show up for this. But it seems to me a real possibility, especially if there are enough people and institutions that would find the results of value.

      1. The Digital Drake

        You’re welcome to do so; I’m glad you found it worth promoting! Feel free to clean up or elide any confusing or unnecessary bits, or add your own suggestions.