Gavia Libraria

Who knows whose journals?

One thread of discourse in the aftermath of the dissolution of Beall’s List has concerned the ability of scholars to distinguish acceptable from unacceptable journals. According to a number of scholars who have weighed in on this question, anyone with the guts to call themselves a scholar should be able to do this. Implied: doing so is fast and easy!

Really? The Loon is unsure that every scholar should be able to do this, much less that doing it is always fast or easy. She is quite, quite sure that not every scholar can. (Consider these no-doubt charming and probably senior scholars, wholly unaware that journals have a political economy, much less how it works. The phrase “lambs to the slaughter” arises in the back of the Loon’s walnut-sized brain.) As is her wont, she will unpack her sureties and less-than-sureties here.

With her former-scholarly-communication-librarian chapeau donned, the Loon can assert from direct experience that perfectly cromulent scholars get snookered by journals made of guano, on both sides of the business-model fence. Some of these scholars, yes, are rather young. Not all, though, and in the Loon’s experience it’s the senior scholars who cling most tenaciously to the guano journals they have chosen, especially if they’ve paid an author-side fee, curiously enough; sunk-cost reasoning combined with a lessened need for journal-conferred prestige, the Loon guesses. As for faux conferences, by now there is an entire confessional mini-genre of reputable scholars blustering about how a faux conference took them to the proverbial cleaners.

As yet, none of the “they oughtta just know!” proponents has confronted these phenomena save by directly or indirectly shaming, at least once quite deeply and hurtfully, those taken in. Way to make friends and influence people, yet again, open-access movement.

The Loon also occasionally finds herself counseling her librarian friends about the likely reputability of a journal or conference. We librarians often face the challenge of judging reputability in somebody else’s field of expertise. We can’t just cast an eye down the editorial board or the list of article titles. Unless we want to do an hour or so’s worth of web searches and send an awful lot of cold emails by way of investigating editorial boards, we have two main arenas for a speedy judgment: the usual surface heuristics, and checking indexes and whitelists. Easy enough, the Loon supposes, but not always at-a-glance easy, or the Loon would not need to counsel any professional colleagues at all… and at least two guano-journal purveyors the Loon won’t name, though her gentle readers can probably guess at them, would be out of business by now.
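(For the curious: the whitelist half of that speedy judgment is mechanical enough to automate. Here is a minimal sketch, assuming DOAJ’s public journal-search API keeps roughly its documented shape; the endpoint path, the "total" response field, and the example ISSN are the Loon’s assumptions, not gospel.)

```python
# Minimal sketch: check whether a journal's ISSN appears in DOAJ.
# The endpoint shape follows DOAJ's published search API; treat the
# exact path and response fields as assumptions that may drift.
import json
import urllib.parse
import urllib.request


def in_doaj(issn: str) -> bool:
    """Return True if DOAJ's journal search finds this ISSN."""
    query = urllib.parse.quote(f"issn:{issn}")
    url = f"https://doaj.org/api/search/journals/{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    # DOAJ reports a match count; presence is a necessary-ish signal,
    # not a sufficient one.
    return data.get("total", 0) > 0


if __name__ == "__main__":
    # Hypothetical example ISSN; substitute the journal under suspicion.
    print(in_doaj("2167-8359"))
```

Which, as the grumbling below will make clear, tells you only that a journal has cleared a bar a determined guano purveyor can fake.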

It may help to remember that journal websites have fairly stereotyped structures and content that guano journals and conferences deliberately try to imitate; indeed, the DOAJ lays these expectations out with admirable clarity. Some guano efforts do not imitate terribly well; those are the easy ones to call out. Judging by the Loon’s own experience and the confessional mini-genre, conferences are easier to fake than journals.

A journal that eschews obvious howlers on its web presence, however, might pass itself off as a new journal (there are lots of them, after all!) quite easily. A guano publisher likewise, at least for a time. (The tension for purveyors of guano is that yet another middle-of-the-road journal may not attract the gullible the way some howlers apparently do.) Even when guano publishers lie outright about their peer-review practices. Even when editorial boards are misappropriated or outright fake; to those who advocate actively checking editorial boards as an evaluation measure: seriously, who has time for that? Even if the journal is in DOAJ, for that matter: everything DOAJ’s current base checklist requires (rather than recommends) is fakeable by a guano purveyor at minor cost, something the Loon has complained about previously. Even this generally good and useful evaluation form has only four easy-to-check unfakeable elements (web search results for the journal, web search results for the publisher, number of articles published, and appearance in disciplinary indexes), and two of those four (the web search results) seem prone to gaming and false negatives. As for name similarity: quick, make a list of all the journal names in your discipline you can think of. How many of them are to-the-letter accurate? Did you list every single journal a guano purveyor might name-squat?
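(To make the complaint concrete, here is a toy rendering of that form’s four checkable elements as a rubric. Every weight and threshold below is a hypothetical of the Loon’s own devising, not anyone’s published method; the only point encoded is that just two of the four elements resist gaming.)

```python
# Toy rubric: the evaluation form's four easy-to-check elements, with the
# two gameable web-search signals discounted. All weights and thresholds
# here are hypothetical illustrations, not anyone's published method.
from dataclasses import dataclass


@dataclass
class JournalSignals:
    journal_search_ok: bool      # web search on the journal name (gameable)
    publisher_search_ok: bool    # web search on the publisher (gameable)
    articles_published: int      # volume of actual output (harder to fake)
    in_disciplinary_index: bool  # listed in a disciplinary index


def rubric_score(s: JournalSignals) -> float:
    """Weight the harder-to-fake signals fully, the gameable ones by half."""
    score = 0.0
    if s.articles_published >= 20:  # threshold is an arbitrary assumption
        score += 1.0
    if s.in_disciplinary_index:
        score += 1.0
    # Gameable signals count half, per the worry about gaming and
    # false negatives in web search results.
    score += 0.5 * (s.journal_search_ok + s.publisher_search_ok)
    return score  # 0.0 (run away) through 3.0 (plausibly reputable)


if __name__ == "__main__":
    suspect = JournalSignals(True, True, 3, False)
    print(rubric_score(suspect))  # 1.0: search-gaming alone gets this far
```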

In other words, easily distinguishable guano comes from purveyors either wholly clueless about (Western?) academic norms or not trying terribly hard to adhere to them. The Loon does not think those two categories cover all the guano out there.

Gold open-access proponents might therefore care to rethink an evaluation strategy that relies on “just knowing” whether a journal is reputable. That road leads to worried authors eschewing all new journals just in case; this disadvantages gold journals, which skew new. Not a few other new journals are products of big-pig publishers’ squatter strategy, which at least hints that their publishers are not putting a great deal of effort into them. The Loon isn’t especially sure she could easily tell a new squatter journal from a new dishonest journal run by smart guano purveyors. Could you? How?

If you just answered “journal impact factor,” kindly make yourself a dunce cap, march yourself to the corner of your office, turn your face to the wall, and think about what you have done. For shame.

Publisher reputation, perhaps? Well, that’s an… interesting… approach for several reasons. One, of course, is that “reputable” publishers publish guano journals and always have. (More charitably, any journal publisher of decent size, say over a couple of dozen journals in its stable, has journals of widely variant age, quality, and reputation.) Another is that this measure again disadvantages a lot of gold-OA journals, often though not always unfairly. Elsevier must be laughing up their sleeves just now, watching avowed open-access advocates fire at their own feet. A third is that this measure unfairly advantages journals published in North America, Europe, Japan, Australia, and arguably China, to the detriment of the rest of the world (and especially of local knowledge therein). How many of those claiming all scholars know which journals in their discipline are reputable “just know” which African journals are? The Loon certainly can’t claim that in her own bailiwick; she would have to do some fairly intense and time-consuming research to sort this out to her own satisfaction.

This observation leads the Loon to a truism: nearly any discipline worthy the name has vastly more decent journals than any individual scholar is familiar with, and the larger the boundary around “discipline” is drawn, the truer this becomes. The Loon can reel off pretty close to all the English-language LIS journals specializing in scholarly communication and research-data management. Expand that to academic librarianship, and the Loon can list almost all the glamour mags and some midlisters, plus the names of big-pig publishers with relevant journals. (She would miss a fair few less-applied journals in history, philosophy, and theory, as those are not her jam.) Expand to all of LIS, and making the Loon look an unlettered fool is quite trivial: start with hardcore information retrieval, also not the Loon’s jam, but certainly an area with an immense amount of active research.

Of course you know your subfield’s glamour mags; almost all of us do, in our own subfields. Can you tell an unfamiliar indie midlister from guano just by looking? Are you sure? (LIS researchers: start your engines, this would be a fascinating study! No fair only using graduate-student subjects. Hit up senior scholars.) If you found the article via random search, disconnected from the context of its journal’s website, would you still be able to? Would you bother, if the article were interesting and useful? Really? The Loon rarely does.

That suggests another structural factor impinging on the ability to judge a journal: for most scholars nowadays, younger scholars especially, the lion’s share of the journal literature they read is found and accessed divorced from its journal context. Typical discovery modes: Google Scholar, disciplinary indexes, and one-search-box library services (“discovery layers” for the academic librarians among us). Typical access modes: library full-text databases (with link resolvers, when they work, bypassing the journal’s larger web presence to go straight to full-text), open access repositories, the open web, and samizdat such as Sci-Hub and #icanhazpdf. Note, by the way, that several of these methods bypass journal and publisher branding near-altogether; the reader only sees whatever is stamped on the article itself.

Convenient, usually, but not terribly helpful for building a sense of which journals have the good stuff, much less which journals the upwardly mobile publish in. It seems no coincidence that the confident journal-judgers have been senior scholars; when they were coming up through the ranks, they had rather more exposure to journal and publisher branding. These days? Well, the Loon can tell a Haworth journal from miles away (their typography is so cramped and unfriendly), but she would likely fare quite poorly at a detect-the-publisher-brand-from-the-article game. She strongly doubts she is alone in this.

One more current phenomenon muddies the waters yet further: institution-level assessment and analytics. The Loon’s Boring Alter Ego’s workplace for some time had a list of “acceptable journals” posted to one of its physical bulletin boards. When the Loon went looking for that list today, it was no longer there, but the Loon did find out its origin: her campus’s purchased productivity-analytics package. Moreover, it appears that said package distinguishes between “research” journals and “service” (usually applied and/or professional) journals, the former of course (this is academia! research über alles!) weighing more heavily in the package’s algorithms than the latter.

(The Loon cannot even muster outrage about this arrant nonsense any more, at least not on her own behalf. She has never published in what that package considers a research journal. She still makes bold to say that what her Boring Alter Ego has published has done as much work in her profession, and even in its allied research discipline, as what her colleagues have published, if not more. Journal prestige, bah, humbug. Service journals worth less than research journals, bah, humbug forever.)

So why even learn to judge journals as a graduate student or junior scholar, if you and your department are hostage to a list handed down from on high? Certainly it takes less time to pick a target or two from the list than to navigate the dozens, hundreds, or even thousands of available possibilities!

In short, the Loon does not think “decide whether an unfamiliar journal is trash or treasure” nearly the airy doddle many seem to think it, and “publish only in familiar journals” is not advice that furthers the gold open-access movement. We might not want blacklists, but we may need them.