Roughing out a new system for identifying useless journals
The hot news in the generally rather staid world of scholarly communication is the sudden disappearance of Jeffrey Beall and his eponymous List. The Loon will not particularly miss it, to be sure, but she regrets to say that given the outcry apparent on social media, many will.
The last thing open access needs is for another of its enemies to take up Beall’s battle standard. It seems safe to ignore Cabell’s vaguely-announced offering in this space, as it will doubtless be subscription-only and will face an uphill battle for market penetration and awareness. As for Think-Check-Submit, find the Loon five authors who actually use it semi-regularly and she might rethink her earlier sharp skepticism.
Like it or no—and the Loon doesn’t, particularly—given the context of Beall’s List’s former popularity, anything purporting to replace it needs to be as simple to use as it was. A journal is reputable or it isn’t, after all. An A-F grading system à la Terms of Service; Didn’t Read might also do. In either system, transparency of grading criteria will be vital. The criteria needn’t be ground into a casual user’s face, of course, but they must be available. They must also apply equally to all journals regardless of business model; the Loon is quite, quite sick of toll-access Big Deals packed with citation cartels, guano like Chaos, Solitons, and Fractals, or the Australasian Journals of Clinician Scammery, and she is all in favor of such publishers earning the guerdon of their guano.
She also believes such a list had better investigate journals rather than publishers, at least to start. Did you know, for example, that both Chaos, Solitons, and Fractals and the six Australasian Journals of Clinician Scammery were published by none other than Elsevier? It is as ludicrous to condemn an entire publisher’s output over a tiny fraction of that output as to exempt any publisher from scrutiny altogether. Now, once a certain percentage of a given publisher’s offerings come out smelling of guano, it seems fairly safe to write off that publisher; but in the interests of lawsuit avoidance, the Loon had rather present the percentages of known guano and as-yet-uninvestigated journals unvarnished. She believes even the rankest publishing neophyte can sort out how to react to that.
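To make that unvarnished presentation concrete, here is a minimal sketch of the per-publisher summary the Loon describes. The status labels, function name, and counts are all invented for illustration; nothing here is anyone’s actual data.

```python
# Hypothetical sketch: summarize a publisher's journals by review status,
# reporting the share known to be guano and the share not yet investigated.
from collections import Counter

def publisher_summary(journal_statuses):
    """journal_statuses: iterable of 'guano', 'clean', or 'uninvestigated'."""
    counts = Counter(journal_statuses)
    total = sum(counts.values())
    return {
        "total_journals": total,
        "percent_known_guano": round(100 * counts["guano"] / total, 1),
        "percent_uninvestigated": round(100 * counts["uninvestigated"] / total, 1),
    }

# A large publisher with a handful of guano titles among thousands:
print(publisher_summary(["guano"] * 6 + ["clean"] * 200 + ["uninvestigated"] * 1800))
# -> {'total_journals': 2006, 'percent_known_guano': 0.3, 'percent_uninvestigated': 89.7}
```

Presented this way, a handful of bad titles registers as a rounding error rather than a blanket condemnation, which is precisely the point.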
What might suitable evaluation criteria be, and what system can be created and sustained to evaluate journals against them? That is the hard part, of course. There can be a tension between the usefulness of a criterion and its ease of investigation. For example, scam journals often have faux persons on their editorial boards, or real persons who did not consent to serve. This is obviously awful, but it is also highly time-consuming to catch a journal at. Easy criteria to judge, such as “does it tout Ulrich’s membership?” are also easy for a scam journal to defeat (though the Loon is constantly astonished at how many don’t).
Because it is sensible to bootstrap something fast and adapt it as opportunity presents, the Loon inclines toward starting with easy criteria that don’t produce giant numbers of false positives. These might include (but doubtless would not be limited to):
- Being on the DOAJ’s list of lying liars who lie. (Other lies should also disqualify a journal, but this particular lie has the benefit of being easy to check thanks to DOAJ’s list.)
- Not being indexed in DOAJ, or in analogous reasonably reputable indexes such as Web of Science. The Loon is not entirely sure this should be a deal-killer long-term, but as a bootstrap criterion it should be fairly solid. (A rough sketch of automating this check appears after this list.)
- Spamming calls for papers, if a suitable spam-collection mechanism can be developed.
- The usual “indexed in Ulrich’s” and “indexed in Google Scholar” nonsense claims. “Look, we have an ISSN, aren’t we shiny?” might not be a disqualifier, but it certainly adds a slight odor of guano.
- Being publicly caught publishing total garbage. (Over time, this criterion could be expanded into a statistically well-run sting operation. The Loon would not be at all averse to such a scheme, as long as toll-access journals get their share of guano to desk-reject!)
- Domain hijacking. (This is usually fairly easy to ferret out with a few whois searches. Any competent e-resources or systems librarian will have little difficulty!)
- Potemkin squatter journal publishing essentially nothing. (A check of the Wayback Machine will often hint at how long a Potemkin journal has been pretending to publish.)
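Two of the cheaper checks above lend themselves to automation already. The sketch below assumes DOAJ’s public journal-search API and the Internet Archive’s CDX API; both services are real, but the endpoint paths and response shapes shown here are assumptions that should be verified against current documentation. It tests whether an ISSN appears in DOAJ and finds the earliest Wayback Machine capture of a journal’s site.

```python
# Rough bootstrap checks, not production code. Endpoint paths and response
# shapes are assumptions about the current DOAJ and Wayback APIs.
import requests

def in_doaj(issn):
    """True if DOAJ's journal search returns any hit for the given ISSN."""
    resp = requests.get("https://doaj.org/api/search/journals/issn:" + issn)
    resp.raise_for_status()
    return resp.json().get("total", 0) > 0

def earliest_snapshot(domain):
    """Timestamp of the Wayback Machine's earliest capture of a site, or None."""
    resp = requests.get(
        "https://web.archive.org/cdx/search/cdx",
        params={"url": domain, "output": "json", "limit": "1"},
    )
    resp.raise_for_status()
    rows = resp.json()
    # First row is the header; the second field of a data row is the timestamp.
    return rows[1][1] if len(rows) > 1 else None

# A journal absent from DOAJ whose site first appeared last month deserves
# a rather closer look than one indexed for a decade.
print(in_doaj("2167-8359"))            # PeerJ's ISSN; expect True
print(earliest_snapshot("peerj.com"))  # e.g. '20120214...'
```

Neither check is conclusive on its own, of course; they merely sort the obviously dubious from the worth-investigating.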
Rather than recruit a large crowd of journal reviewers from the get-go in hopes of building a large backing database, the Loon would be inclined to recruit a smaller cadre and set up the new system in two layers: if the system already has a score for a given journal, present it; otherwise, pass the journal to the reviewer cadre for scoring. This keeps the reviewer cadre from being utterly overwhelmed by the sea of web objects falsely calling themselves journals, most of which see near-zero author interest.
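A minimal sketch of that two-layer arrangement follows; the class and method names are invented, and persistence, authentication, and reviewer assignment are all waved away.

```python
# Layer one: serve any score we already have. Layer two: queue unknown
# journals for the reviewer cadre, each journal at most once.
from collections import deque

class JournalScoreService:
    def __init__(self):
        self.scores = {}             # journal id -> grade assigned by reviewers
        self.review_queue = deque()  # journals awaiting the cadre's attention
        self.queued = set()          # guards against duplicate queue entries

    def lookup(self, journal_id):
        """Return a known grade, or None after queueing the journal for review."""
        if journal_id in self.scores:
            return self.scores[journal_id]
        if journal_id not in self.queued:
            self.queued.add(journal_id)
            self.review_queue.append(journal_id)
        return None  # the honest answer until a reviewer gets to it

    def record_review(self, journal_id, grade):
        """Called when a cadre member finishes scoring a journal."""
        self.scores[journal_id] = grade
        self.queued.discard(journal_id)

service = JournalScoreService()
print(service.lookup("journal-of-appalling-medicine"))  # None; now queued once
service.record_review("journal-of-appalling-medicine", "F")
print(service.lookup("journal-of-appalling-medicine"))  # 'F'
```

The queue only ever grows as fast as authors actually ask about journals, which is what keeps the cadre’s workload proportional to real demand rather than to the size of the guano sea.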
Take these notions for what they are worth; the Loon asserts no ownership in them. By all means improve on them!
> “She also believes such a list had better investigate journals rather than publishers, at least to start.”
The Loon is a sensible bird.
Does such a system need an institutional home? This feels like a double-edged sword; on the one hand, an (ideally disinterested) organization could readily bat away the inevitable legal threats in much the way that Beall’s status as a tenured academic presumably protected him. On the other hand: what organizations exist that are disinterested with respect to any particular publisher, library, or scholarly association that also have a solid track record of intestinal fortitude? And how would the effort get funded?
But the thing about Think-Check-Submit is that authors don’t need to use it regularly—they can look at the site just once, say to themselves “Ah, I never thought to look for X and Y!”, and then keep those things in mind in the future. This in the end isn’t all that different from the non-nuanced (and troublesome) views new graduate students assimilate quickly from more senior colleagues: “don’t bother publishing in any journal with an impact factor lower than N”, “don’t publish in online journals” (by which they actually mean online-only journals), and, in some fields, “don’t publish with any publisher that makes you pay”.
I completely understand the need not to offend any publisher by declaring all of its journals potentially fraudulent. However, as a long-time reader of Beall’s list, my impression of his research is that a predatory journal was frequently only the tip of a particular publisher’s systemic predatory-publishing racket. In such a racket, “journal” is a word loosely applied to a vaporware service that extorts money from authors. These are the same operations Beall’s list caught naming real reviewers and editors who were in fact not associated with the journal or publisher at all.
I am not saying that one bad journal should typically cause a publisher to be labeled fraudulent, but neither should that conclusion be dodged so easily. Yes, Elsevier may have had one or two fraudulent journals, but Elsevier is not engaged in systemic predatory publishing, allegedly. Author boycotts of Elsevier may suggest otherwise.