Gavia Libraria

Thank you

A book came out yesterday in which the Loon’s Boring Alter Ego had been outed.

The online open-access version has already been fixed.

The Loon is deeply, pathetically grateful to the book’s editorial and production team. She had frankly expected stonewalling and pushback. Instead, she received sympathy and impressively immediate action. (The Loon, remember, has done typesetting. She knows that her ask was not an insignificant one!)

Thank you, very much, from the bottom of the Loon’s wizened avian heart.


In which the Loon soliloquizes

(The rhythm of the Shakespeare speech cannot
survive intact this Loonly recasting;
if you would be so kind as to forgive,
the angry Loon will soon return to prose.
She mentions only that the noble Brembs
did not originally use the word
“clerical,” but “menial” to sum the work of such
as Loons, librarians, and preservation wonks.
Oh, nor is this the first or only time
that Brembs has wantonly accused the work
of publishing, scholcomm, and data pros
of being facile, vain, and valueless.
In short, the Loon has had it with this shit.)

Serfs, IT, librarians, lend me your ears;
I come to bury our work, not to praise it.
The research that men do lives after them;
The rest of it is merely clerical;
So we are assuredly told. The noble Bjorn B
Hath told you that this work is clerical:
If it were so, it was a grievous fault,
And grievously must we peons suffer for it.
Here, under leave of Bjorn B and the rest—
For Bjorn B is an academic man;
So are they all, all academic men—
Come I to share this widespread disrespect.
I spent much time to learn the work I do:
But Bjorn B hath said that it is clerical;
And Bjorn B is an academic man.
I have salvaged many datasets from death
And shepherded them to a safer home:
Does this complex work to you seem clerical?
When that the law hath changed, I have explained:
And “clerical” is condescending dross:
Yet Bjorn B says our work is clerical;
And Bjorn B is an academic man.
You all have seen the standards wilderness
which is my job to tame and make of use,
That, too, must change, its data too: is this clerical?
Yet Bjorn B says that it is clerical;
And, sure, he is an academic man.
I speak not to disprove what Bjorn B spoke,
But here I am to speak what I do know.
The open access movement disdains all
And greatly thins its own thin ranks thereby.
O judgment! thou art fled from learnèd jerks!
And needed work, unvalued, can’t go on,
While nasty scornful oafs proclaim it clerical,
And I must pause till someone give it worth.


Cynical in the right ways: eLife’s peer-review choice

So, eLife is changing its peer-review process:

eLife’s peer-review process is changing. From January 2023, eLife will no longer make accept/reject decisions after peer review. Instead, every preprint sent for peer review will be published on the eLife website as a “Reviewed Preprint” that includes an eLife assessment, public reviews, and a response from the authors (if available).

The more the Loon thinks about this, the more she finds it cynical in the best ways, the right ways.

Let us by all means be clear: peer review as presently practiced is a disaster. It’s trivially gameable, horrifically biased, trivially corruptible for ax-grinding or academic feuds or any number of other less-than-admirable ends, and utterly ineffective at keeping garbage out of The Litrachoor. The people at eLife clearly know all this—admirable by itself; too many academics and even librarians don’t—and to change their process in the ways they have, they must have decided a lot of it is tied to the high-stakes go/no-go decision. So they surgically removed that power from reviewers. Cynical and clever.

Does this mean, then, that eLife will inevitably become a dumpster fire, overrun by trash that would never make it past a proper review? Oh, no, not at all. Public reviews should quite neatly do for that—the sort of academic cockroach who games peer review to publish plagiarized or p-hacked or image-diddled or corporately-gamed or otherwise beyond-unacceptable work expects to scuttle behind confidential review. Cockroaches should flee public review like the—well, roaches they are. Review cartels and fake review scams will have at least some trouble persisting.

Moreover, public reviews also stab several styles of ax-grinding to the heart: the feuding scholar who has been handed their enemy’s work to review; the racist, sexist abomination (the Loon will not call these excrescences “scholars”); the senior scholar who no-goes anything that questions or invalidates their prior work; and the scholar of any vintage who uses peer review as a power trip. They also, in point of fact, protect junior-scholar reviewers who make honest negative assessments of more senior scholars’ work—no more who-said-what-ing in private email; the arguments can be evaluated on their merits.

So cynical and so clever, eLife. Well done.

What’s more, eLife can now experiment with building (or installing already-built) detection mechanisms (automated and human) for modern academic sins that peer reviewers rarely catch: data problems, image-diddling, p-hacking, and the like. This removes a burden peer reviewers should honestly never have had to carry and don’t have time or tools for. It’s right that this style of detection should happen at the preprint-server/journal level. The Loon will watch eLife for signs of evaluation innovation with considerable interest.
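
To gesture at what one such automated check could look like, here is a toy of the Loon’s own devising (nothing eLife has announced), a GRIM-style granularity test: it flags reported means that cannot possibly arise from integer-valued data at the stated sample size.

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM-style granularity check: the mean of n integer-valued responses
    must equal some integer total divided by n, to within reporting precision."""
    target = round(reported_mean, decimals)
    # Integer totals 0..10*n cover rating scales of up to 10 points per item;
    # widen the range for other instruments.
    achievable = (round(total / n, decimals) for total in range(10 * n + 1))
    return target in achievable

# A reported mean of 5.19 from n = 28 integer responses is impossible: no
# integer total divided by 28 rounds to 5.19. Flag it for a human to chase.
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.18, 28))  # True (145 / 28 = 5.1786, rounds to 5.18)
```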

This leaves peer reviewers to do what good ones do best: assess and help improve the work. As always, the Loon dips her beak in respect to the many reviewers who have improved the Boring Alter Ego’s publications.


Cur OCLC delenda est? or, the Loon, the witch, and the audacity of this Pritch

(Apologies for the post title, which is a groaner even for the Loon. The Loon simply could not resist it.)

The Loon, as she mentioned, has quite a few reasons to want to see OCLC staked and turned to dust, but let us get one out of the way as quickly as possible. Any organization, for-profit or non-, with the naked audacity to think hiring female furniture is a dandy livener for a professional conference reception needs to be buried and the earth over its grave salted. There is no excuse for that. It was and is sexist, dehumanizing, and wrong.

Enough of that repellent subject. (It’s not as though anything will be done about it at this late date. OCLC ignored social-media outcry at the time.) A little more technobabble-inflected history for you. Now, the Loon and her Boring Alter Ego are not and have never been catalogers (digital-collections metadata is more the BAE’s métier), so it is likely she will miss or mistake some details; she apologizes in advance for such solecisms. In the main, though, she thinks she can explain this pile of guano. Oh, and the Loon should perhaps mention, just for the most transparency possible, that her Boring Alter Ego was paid once to present at an OCLC event (at which, naturally, she bit the hand that was feeding her good and hard), and has been headhunted for positions at OCLC more than once (to the horrified amusement of some of the BAE’s work colleagues). The jobs would have meant immensely more money, to be sure, but the Loon’s conscience straitly forbids, as does her unshakable need to eschew workplaces that hire female furniture.

Shared cataloging did not begin with computers; thanks to early standardization of catalog and catalog-card size (and, it must be said, the early near-monopoly of one small-l loon named Melvil Dui), the Library of Congress began printing and selling catalog cards for US libraries sometime in the earlyish 1900s (the Loon isn’t sure exactly when; 1910 or 1920 maybe?). The efficiency of centralized record creation and card production should be obvious—every single library Library-Handing or typing up a card for every single commodity-published book is a fairly ridiculous notion if there’s any other way to do it.

Libraries began the process of computerizing catalog records in the 1960s—likely even earlier, considering. (There exists a recently-revised book about corporate data management that contains the unaccountable sentence “An organization without metadata is like a library without a card catalog.” The Loon just… hasn’t the fight in her to tell its publisher what steaming guano the library side of that analogy is.) The data structure called MARC (for “machine readable cataloging”) designed by the peerless Henriette Avram shaped that transition, and is still (pace encoding changes, loosening record-length restrictions, a few other smallish tweaks, and the scourge that is “local [cataloging] practice”) in use in most US, UK, and Australian library systems today. Variants on MARC and MARC-based cataloging practice exist elsewhere in the world as well.

The Loon cannot bugle loudly enough that MARC was not designed for efficient information retrieval, neither searching nor browsing nor querying nor filtering nor faceting. It’s really quite bad at all that; ask any ILS developer, if you can abide the ensuing swearing. MARC was designed as a source format for mass-producing catalog cards. (The Loon does wonder sometimes what Avram knew about SGML, if anything. She might well have eschewed it for its storage inefficiency. Will some enterprising Ph.D. candidate in library history kindly get off their tail feathers and write a biography of Avram as their dissertation? It’s long past due.)
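
For readers who have never peered inside one, a deliberately oversimplified toy (plain Python, not real MARC) may make the point concrete: everything hinges on knowing numeric tags and subfield codes by convention, and nothing in the format itself is built for querying.

```python
# A drastically simplified stand-in for a MARC bibliographic record:
# numbered fields, two indicator characters, lettered subfields.
# (Real MARC adds a leader, a directory, fixed fields, and much pain.)
record = {
    "245": [("10", {"a": "Cataloging the deep :", "b": "a field guide /",
                    "c": "by A. Loon."})],
    "260": [("  ", {"a": "Lake City :", "b": "Gavia Press,", "c": "2022."})],
    "650": [(" 0", {"a": "Cataloging."}), (" 0", {"a": "Loons."})],
}

# Even "what is this book's title?" requires knowing, by convention, that
# titles live in field 245 subfields $a and $b; nothing in the data says so.
title = " ".join(
    value for _ind, subs in record["245"] for code, value in subs.items()
    if code in ("a", "b")
)
print(title)  # Cataloging the deep : a field guide /

# Subject "search" means walking every 650 $a by hand: no index, no typed
# schema, no query model anywhere in the format, just card-printing order.
subjects = [subs["a"] for _ind, subs in record.get("650", [])]
print(subjects)  # ['Cataloging.', 'Loons.']
```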

Mass production of catalog cards, easier record correction and updating, and sharing the cataloging load was the whole point of MARC. Remember, at this time photocopiers were not a thing, or at least not a thing within reach of most libraries. Unfortunately, libraries were already accustomed to mostly-centralized cataloging and computers were not common enough or well-enough networked at the time to build a viable peer-to-peer system, so OCLC swept right in to take the Library of Congress’s place as central record provider, swiftly becoming a de facto monopoly across most of the English-language-cataloging world.

It is no mystery how monopolies behave; economists are rather tiresomely repetitive on the subject. We see ourselves today in a situation where purported “non-profit” OCLC pays Skippy the Audacious Pritch over a million and a half a year (per the ICOLC report, which the Loon strongly recommends that you read), and sues everyone in sight, from a hotel with the temerity to paint Dewey numbers on its wall (OCLC also controls DDC) to a potential competitor (SkyRiver; long story the Loon won’t tell except to point out the anti-competitive behavior) to Clarivate/ExLibris. Charming people, OCLC. Just amazingly gracious and collaboration-minded folks, not threatening or self-dealing or absurdly entitled witches at all.

In whatever limited fairness the Loon can muster about all this, she will say that by and large she admires the work of OCLC Research. OCLC didn’t build the unit from scratch, though; it bought Research Libraries Group sometime or other in the late oughts. Even so, a lot of good and worthwhile work has come out of that particular think tank, though the Loon will never understand why they employed the rather awful Jackie Dooley for a time. In fairness to the fairness, OCLC also has something of a track record of abandoning useful projects it can’t work out how to make money from, sometimes ones originating in OCLC Research; the way it dumped the PURL(.org) permalink scheme without a word beforehand to anyone relying on it was simply slipshod.

So OCLC’s main reason for existing is the tooling (“OCLC Connexion”), aggregation, correction, enhancement, sale (via subscription/membership model), and presentation (via WorldCat) of bibliographic and holdings records created by its member libraries. Yes, you read that right—unlike the Library of Congress back in the day, OCLC doesn’t itself catalog anything. Enjoy the parallels with scholarly communication! And yes, this also means that WorldCat’s name is a lie; it is not a truly global union catalog because many libraries, especially those where English is not a first or common national language, are not OCLC members. This is not to say that OCLC does not do real work; like even the laziest, most entitled of the big-pig publishers, it does. It is absolutely to say that OCLC, like the big pigs, holds libraries (collectively) to grossly elevated ransom hugely disproportionate to the actual work it does. The surplus seems to go to Skippy the Overpaid Pritch and lawsuits, mostly… and remember that the said surplus is extracted from libraries.

(There is a rant the Loon may yet rant on how often libraries blunder into this type of exploitation, partly due to inability to initiate, much less maintain, a proper commons. OCLC, big-pig publishers, the ILS, the institutional repository, many kinds of proprietary software, more… but not today.)

The key to this whole system, computationally, is an OCLC-specific identifier for bibliographic records called the “OCLC number.” (If any OCLC numbers turn up in MetaDoor, the Loon thinks Clarivate/ExLibris will be wholly unable to make any case in court that the records containing them didn’t come from OCLC, one way or another.) The OCLC number ties together record creation, record merging (i.e. of records from different catalogers/libraries/vendors describing the same “thing,” and please do not ask the Loon to define “thing” here because she will only weep copious linked identified FRBR-shaped res-and-nomen-flavored tears from her beady red eyes and no one wants that), record corrections and updates, and connections to other library systems and processes such as interlibrary loan. For probably-obvious reasons, this number, though it properly identifies only a bibliographic record, is often used as shorthand for whatever thing (see above about defining “thing”) the record describes.
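
To make the merging role concrete, a toy sketch of the Loon’s own (in exported MARC the OCLC control number conventionally rides along in field 035 $a with an “(OCoLC)” prefix): cluster contributed records on the number, and each cluster is a merge candidate.

```python
from collections import defaultdict

# Toy contributed records from three member libraries.
contributed = [
    {"oclc": "(OCoLC)12345", "source": "Library A", "title": "Loons of the north"},
    {"oclc": "(OCoLC)12345", "source": "Library B", "title": "Loons of the North."},
    {"oclc": "(OCoLC)67890", "source": "Library C", "title": "Field guide to grebes"},
]

def normalize(oclc_value: str) -> str:
    """Strip prefixes so '(OCoLC)12345'-style values compare cleanly."""
    return "".join(ch for ch in oclc_value if ch.isdigit())

# Cluster on the identifier: records sharing an OCLC number are (supposed to
# be) descriptions of the same "thing", hence candidates for merging,
# deduplication, correction, and enhancement.
clusters = defaultdict(list)
for rec in contributed:
    clusters[normalize(rec["oclc"])].append(rec)

for number, recs in clusters.items():
    print(number, [r["source"] for r in recs])
# 12345 ['Library A', 'Library B']   <- collapse into one master record
# 67890 ['Library C']
```

One number standing in, across every member library, for the record and (sloppily) for the thing the record describes.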

Rather like—and if the Loon had hands she would be jazz-handsing—a linked-data URI. Like URIs for RDF, the OCLC number is the lynchpin of OCLC’s bibliographic enterprise. Indeed, if not for OCLC playing dragon-on-the-hoard, the OCLC number might have near-seamlessly evolved into the linked-data identifier for the biblioverse, for the objects (not to say “things”) within its purview. As it is, any competing system, especially one with an eye to linked-data friendliness, will have to whomp up a whole new record identifier. The Library of Congress can’t easily step in here; its holdings and recordset aren’t nearly as extensive as OCLC’s. Wikidata, for all its curious and generally delightful boldness, is not a bibliographic-record database, and (from what little the Loon understands about Wikibase) might well not scale to one without the servers falling over dead.
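
Purely to illustrate the road not taken (the URI pattern and vocabulary below are the Loon’s inventions for the example, not anything OCLC actually offers): mint the number into a dereferenceable URI, hang assertions off it, and you have a linked-data identifier.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

SCHEMA = Namespace("https://schema.org/")  # vocabulary chosen for the example

g = Graph()

# Hypothetical pattern: the OCLC number minted into a URI under some
# neutral, stable domain (example.org here, because no such domain exists).
work = URIRef("https://example.org/oclc/12345")

g.add((work, RDF.type, SCHEMA.Book))
g.add((work, DCTERMS.title, Literal("Loons of the North")))
g.add((work, SCHEMA.author, Literal("A. Loon")))

# Any catalog anywhere could now say things about the same URI, which is
# the entire point of a shared identifier.
print(g.serialize(format="turtle"))
```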

With that for background, what is it that MetaDoor is supposed to be, and how is it supposed to compete? Huge caveat for the ensuing discussion: the Loon has no insider knowledge, and Clarivate/ExLibris is playing its cards close to its chest at present. The Loon is of necessity making some educated guesses here.

Like OCLC, MetaDoor is intended to be a database of bibliographic records contributed by library catalogers. Unlike OCLC, MetaDoor is (to start, anyway) not playing dragon-on-hoard; the records will be open-licensed such that any given record up to the entire database is takeable and forkable. A later enclosure play is quite possible—“records contributed until now are CC0; henceforth, we are pulling an OCLC, such that we own whatever you put in, and you buy it back from us.” There would be outcry, but Clarivate/ExLibris need simply bet that libraries are too foresightless, cheap, and fighty to work out how to fork the database and collectively maintain and add to it—and the Loon must say, that is a very smart bet on C/ExL’s part.

Conspicuously missing from what the Loon has seen about MetaDoor is any mention at all of the sort of record deduplication, correction, and enhancement processes that OCLC routinely performs on contributed records. Bluntly: MetaDoor will be a wild abyss of near-total chaos. The Loon doesn’t think Clarivate/ExLibris (which, after all, builds a major ILS) harbors any delusions that the quality of contributed records will be high, or even uniform. Instead, she suspects that libraries will be gently encouraged toward a sort of peer-to-peer copy-cataloging system, in which catalogers look for libraries that do good work and set up their systems to adopt those libraries’ records. If MetaDoor is thinking toward linked data, another way to approach this would be to start breaking down MARC records into granular datapoints that could be queried to suit, or built up into decent-enough MARC records. (If the FRBRoids had actually known anything about real-world relational database design, which they did not, this breaking-down and reconstitution could have begun two decades ago, but once again, here we are. Some days the Loon just despairs of librarians, or at least librarian standardistas.) The other consequence of this free-for-all is that MetaDoor will not easily, or perhaps at all, be able to build an analogue to WorldCat.
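
The breaking-down mentioned above is not conceptually difficult; here is a toy sketch of the idea (the Loon’s own, not anything MetaDoor has described):

```python
# Toy: flatten a MARC-ish record into granular (record, tag, subfield, value)
# datapoints that can be queried on their own or reassembled on demand.
record_id = "metadoor:rec-0001"  # hypothetical identifier
fields = [
    ("245", "a", "Loons of the North :"),
    ("245", "b", "a field guide /"),
    ("650", "a", "Loons."),
    ("650", "a", "Waterbirds."),
]

datapoints = [(record_id, tag, code, value) for tag, code, value in fields]

# Query the granular form directly (all subject strings, say)...
subjects = [v for (_r, tag, code, v) in datapoints if tag == "650" and code == "a"]
print(subjects)  # ['Loons.', 'Waterbirds.']

# ...or reconstitute a decent-enough MARC-shaped field whenever a legacy
# system insists on one.
title_field = "245 10 $a {} $b {}".format(
    *[v for (_r, tag, _c, v) in datapoints if tag == "245"]
)
print(title_field)  # 245 10 $a Loons of the North : $b a field guide /
```

(None of which helps anyone build a WorldCat analogue, of course.)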

Other developers might, however, if they are willing and able to take on MetaDoor’s chaos and withstand the probably-inevitable lawsuit from OCLC. Other developers might do a lot of things, possibly quite useful and attractive things, with the data in the MetaDoor database. At least to start, Clarivate/ExLibris will be happy to let them! Any win for MetaDoor chips away at OCLC’s de facto monopoly. Beware the day OCLC folds, however; the logical business thing for Clarivate/ExLibris to do then is pull a Twitter, destroying useful APIs or charging through several available orifices for access to them.

In the Loon’s trawl through the abovementioned corporate data management book, she learned that there is a business term for the kind of chaotic mess MetaDoor is likely to be: “data lake.” Throw all the data in, forget about quality, just dump it in and see what falls out. Over time, if the data in the lake is at all useful, busy IT beavers will start cleaning up and organizing the data they’re interested in, prodding data creators into better data-quality practices, separating out coherent chunks of data into data marts, building data marts up into data warehouses, and so on. Is Clarivate/ExLibris cynically hoping that library developers will cheerfully fix up MetaDoor’s data lake at no cost to it? Seems likely. Also a decent bet, the Loon thinks—cheerful librarian fixers are a big part of how OCLC became what it is, and ExLibris constantly dumps a ton of uncompensated quality-control, usability-testing, accessibility, assessment, development-strategy, and other work on systems librarians as it is.
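
For the uninitiated, a toy rendition of the lake-then-mart progression, with invented records and invented cleanup; the point is only the shape of the work:

```python
# A toy data lake: heterogeneous records dumped in as-is, quality unknown.
lake = [
    {"title": "Loons of the North", "isbn": "978-0-00-000000-2"},
    {"Title ": "loons of the north", "isbn": "9780000000002"},  # messy duplicate
    {"title": None, "notes": "donated box 12, uncataloged"},    # junk, for now
]

def normalize_isbn(value: str) -> str:
    return "".join(ch for ch in value if ch.isdigit())

# Later, some busy IT beaver carves a small, cleaned "data mart" out of the
# lake: only records with usable ISBNs, keys tidied, duplicates collapsed
# (first decent value wins, in this toy).
mart = {}
for rec in lake:
    title = (rec.get("title") or rec.get("Title ") or "").strip()
    isbn = rec.get("isbn")
    if title and isbn:
        mart.setdefault(normalize_isbn(isbn), title)

print(mart)  # {'9780000000002': 'Loons of the North'}
```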

If both OCLC and MetaDoor sound like grossly exploitative and unfair systems, well, this is why the Loon wishes a pox on both their houses. Is there a path out?

If the Loon ruled libraryland, she would pull together a bunch of library CIOs (they’re not often called that, but they do exist) and knock their heads together until they agreed to cough up the collective funding and development/systems effort to mirror (well, mirror plus delta) MetaDoor data on a regular basis, and provide value-add services such as APIs at low (including sweat-equity) or no cost. She would then pull together a bunch of library cataloging luminaries and knock their heads together until they agreed to build record pipelines into the CIOs’ record commons alongside whatever other pipelines they have—transparently-documented pipelines that do not invoke the wrath of Skippy the Litigious Pritch and his horde of slavering lawyers.
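
What “mirror plus delta” might amount to in practice, as a heavily hypothetical sketch; every endpoint, path, and field name below is invented, since MetaDoor has published no such API that the Loon knows of:

```python
import json
import urllib.request
from datetime import datetime, timezone

# Entirely hypothetical endpoint and file layout.
FEED_URL = "https://metadoor.example.org/records/changed?since={since}"
MIRROR_PATH = "commons-mirror.jsonl"
STATE_PATH = "last_sync.txt"

def read_last_sync() -> str:
    try:
        with open(STATE_PATH) as fh:
            return fh.read().strip()
    except FileNotFoundError:
        return "1970-01-01T00:00:00+00:00"  # first run: mirror everything

def sync_once() -> None:
    since = read_last_sync()
    with urllib.request.urlopen(FEED_URL.format(since=since)) as resp:
        changed = json.load(resp)  # assume a JSON list of changed records
    # Append the delta to the local record commons; real life would upsert
    # by record identifier and keep provenance.
    with open(MIRROR_PATH, "a") as out:
        for rec in changed:
            out.write(json.dumps(rec) + "\n")
    with open(STATE_PATH, "w") as fh:
        fh.write(datetime.now(timezone.utc).isoformat())

if __name__ == "__main__":
    sync_once()  # run from cron or similar, on whatever cadence suits
```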

That way, if MetaDoor tries enclosure, a replacement record commons will already exist (and not have to be built from scratch in a tearing hurry) and the cutover for libraries and their catalogers will be a lot less painful than it would otherwise be. As that commons becomes cleaner and more sophisticated, even if MetaDoor doesn’t try enclosure, it will become an increasingly viable, likely less-expensive alternative to both MetaDoor and OCLC. Virtuous circle!

But the Loon does not rule libraryland, so we shall all have to wait and see.


Stoning Goliath

As soon as the Loon heard about Clarivate/ExLibris’s “MetaDoor” initiative, she knew OCLC would not be pleased. She placed a good many mental quatloos on a lawsuit, which has now materialized.

The Loon’s read of the lawsuit materials so far is “fishing expedition.” If OCLC had proof that WorldCat records had ended up in MetaDoor, it would gleefully have exhibited that proof to the court. It doesn’t. It’s fishing for the slightest shreds of evidence that Clarivate/ExLibris might have wink-wink-nudge-nudge hinted to its MetaDoor beta participants that copying over WorldCat records would be acceptable, or that MetaDoor does not make any effort to keep WorldCat records out. Lacking those, OCLC wants to make extraordinarily clear that WorldCat records just better not end up in MetaDoor.

What an ignominious development in the history of the MARC standard, which was explicitly designed for record sharing. If the Loon thought Skippy the Unconscionably Overpaid Pritch had any scruples whatever… but there, it’s clear he doesn’t after his video response to the cogent, clear ICOLC report on OCLC’s pencil-mustachioed villainy. (The said video has vanished, or the Loon would link thereto.)

The Loon has no particular love for Clarivate or ExLibris, it must be said, no more than for OCLC. A pox on both their houses; if this were merely a matter of Godzilla and Kong slugging it out for filthy lucre, she wouldn’t care who won. It’s not, though. There’s a David eyeing up the OCLC Goliath: libraries themselves, whom OCLC has extorted and hung out to dry for years, if not decades.

The Loon can’t say David doesn’t in some small part deserve Goliath’s ill-treatment. David could have seen the monopoly coming and worked against it, or built a properly communal alternative to it. It is a true pity (and embarrassment) that libraries are so bad at collaborating on matters involving technology; Hathi Trust is very nearly the field’s only success in that space. David could have forced Goliath into open licenses for WorldCat data. David could have yelled—librarians are quite good at yelling, actually—for Goliath to stop playing dog in manger. David sat around herding sheep catalog records and whistling, instead, while Goliath busily enclosed his pastures, started charging him usurious rent for their use, and stole his sheep records without recompense.

Parallels to scholarly communication are rather stark. Publishing is a set of services. So are catalog-record enhancement, correction, and aggregation. Yet both big-pig publishers and OCLC turned a service set into egregious rent-seeking through squatting on the results. (The analogy falls apart a bit when it comes to copyright, admittedly: the copyright status of catalog records is unclear and disputed, and outside Europe OCLC would have no hope of making intellectual-property arguments about WorldCat as a whole or its record enhancements and corrections—which are indisputably real!—in particular.)

All that said, it is likely in David’s best interest for Clarivate/ExLibris to win this specific lawsuit. MetaDoor has been designed, as best the Loon can tell, to be open enough for forkability: CC0 licensing, peer-to-peer record sharing, and so forth. That doesn’t completely remove the possibility of Clarivate/ExLibris pulling a 1990s-Microsoft “embrace, extend, extinguish,” of course—but the point is, OCLC already has a de facto monopoly desperately in need of breaking. If MetaDoor can accomplish that, more power to MetaDoor. Can it? The Loon cannot read judges’ minds, ergo dares not opine.

The Loon also thinks there is a third path here. It is a steep, rocky, difficult path, but a better one than either MetaDoor or WorldCat: ditching MARC altogether for a linked-data cataloging infrastructure. Smartly built, this could supplant and ultimately destroy current-generation catalog recordstores and make the embrace-extend-extinguish strategy far more difficult to re-implement. Let OCLC and Clarivate/ExLibris sue each other into extinction over MARC-based assets inexorably shrinking in value. A pox on both their houses.

Such a replacement infrastructure obviously cannot spring forth like Minerva from the head of Jove. Worse still, it faces significant headwinds, some technical but most organizational. Technical headwinds include:

  • the utter badness and unimplementability of linked-data standards, from RDF through OWL to SPARQL, and pray do not mention the excrescence that is RDF/XML in the Loon’s presence—linked-data standards were just badly designed and the Loon is entirely done pretending otherwise
  • the sparseness, gappiness, and badness of linked-data tooling, both inside and outside librarianship
  • the genial badness of BIBFRAME’s design, and the grotesque worseness (if that is a word) of its toolsets; competitor RIC-O achieves the signal distinction of an even worse design than BIBFRAME
  • the global multiplicity of linked-data models for bibliographic description—catalogers are, for good or ill, accustomed to One Standard to Rule Them All
  • the near-nonexistence of crosswalking tools from MARC to any of the available linked-data models and back (which is, in fairness, a hard problem; MARC is a disaster, computationally)
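
On that last point, a toy taste of what even a trivial crosswalk involves; the mapping below handles two fields and is the Loon’s own illustration, while real crosswalks must cope with hundreds of tags, indicators, and ISBD punctuation conventions, which is exactly the hard part:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS

SCHEMA = Namespace("https://schema.org/")

# Two MARC-ish fields: tag, indicators, subfields (toy shape, not real MARC).
marc_fields = [
    ("245", "10", {"a": "Loons of the North :", "b": "a field guide /"}),
    ("650", " 0", {"a": "Loons."}),
]

def crosswalk(record_uri: str, fields) -> Graph:
    """Map a couple of MARC tags to linked-data properties. Illustrative only."""
    g = Graph()
    s = URIRef(record_uri)
    for tag, _ind, subs in fields:
        if tag == "245":
            # Join $a and $b and strip trailing ISBD punctuation.
            title = " ".join(subs.get(c, "") for c in ("a", "b")).rstrip(" /:")
            g.add((s, DCTERMS.title, Literal(title)))
        elif tag == "650":
            g.add((s, SCHEMA.about, Literal(subs["a"].rstrip("."))))
    return g

print(crosswalk("https://example.org/rec/1", marc_fields).serialize(format="turtle"))
```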

The organizational headwinds—beyond the staringly obvious “most libraries and librarians have been all-but-impossible to move off MARC”—are even more complicated. One blockage is NISO, run by a wannabe Skippy the Pritch named Todd Carpenter, which nominally owns the USMARC standard and is adamantly (if quietly) opposed to any move away from it, as that would diminish NISO’s importance (such as it is) to libraries. (The Loon and her Boring Alter Ego are adamantly opposed to NISO’s continued existence for many other reasons not germane to this post, she feels she should say. The BAE is not at all quiet about this, so the Loon need not yodel much on the topic.) The Library of Congress should be a natural leader in the move away from MARC, but largely is not; the Loon cannot prove that NISO is partly or wholly responsible, but based on the BAE’s prior direct experience with NISO, she suspects it to be.

Another headwind lies in the concentrated near-monopoly integrated library system (ILS) market. ExLibris absolutely has the technical and UX chops to architect a linked-data catalog, and the market muscle to impose such an ILS organizationally, but to do so would mean severe depreciation in its investment in MetaDoor, so the Loon can fairly safely say it’s unlikely to happen. Of the few other survivors (there is no other word) in this market, the Loon thinks none of them can pull this off. Even the open-source Evergreen and Koha, which have no reason the Loon can fathom to oppose the notion, don’t have the money or developers to accomplish it.

A steep, rocky, difficult path, without question. What is the way to climb it? Is there a way?

Technically, the Loon sees little to be done presently about the rotten and rotting standards edifice underlying linked data. The Loon would cheerfully serve on a committee to replace the whole putrefying stack with something that actually works—designed with tables rather than triples in mind (this ship has sailed: the real world runs on tables, not directed acyclic graphs), in a syntax as essentially human-reader-friendly as Turtle (which is quite good), keeping the URL/URI identifier system (which was a genuine stroke of brilliance), integrating a validation system (including a requirements/constraints language and a spec for data-validation tools based on that language), building a query system that (unlike SPARQL) doesn’t knock servers over dead at the least typo and is not an invitation to DDoS attacks, and so on. (The Loon’s service would likely be quite similar to that of the I’m Bored Girl at the IgNobels: repeating “this is unimplementable; fix it” over and over until it’s not.) But that’ll be the day. We shall have to make do with what we have.
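
To gesture at what “tables rather than triples” means in practice, a toy comparison of the Loon’s own devising, blessed by no committee: the same assertions as one relational row and as a pile of triples.

```python
# The same bibliographic facts, twice over.

# As one row in a table: fixed, typed columns, trivially indexed, joined,
# validated, and queried by fifty years of boring, battle-tested tooling.
book_row = {
    "uri": "https://example.org/rec/1",
    "title": "Loons of the North",
    "author": "A. Loon",
    "year": 2022,
}

# As triples: maximally flexible and schema-optional, and every question
# becomes a little graph traversal the database must reassemble on the fly.
book_triples = [
    ("https://example.org/rec/1", "title", "Loons of the North"),
    ("https://example.org/rec/1", "author", "A. Loon"),
    ("https://example.org/rec/1", "year", 2022),
]

# "What is the title?": a column lookup versus a scan for the right predicate.
print(book_row["title"])
print(next(obj for (_s, pred, obj) in book_triples if pred == "title"))
```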

Technical possibilities do exist for smoothing the path, however. MARCEdit, were Terry Reese so inclined, could build even more linked-data tooling than it already has. The most useful innovation would be a linked-data retrieval service configurable to pull bibliographic linked data from any and all peer catalogs or other sources, with shareable configuration/translation/ETL files to spread the load of adding new queryable sources. Presently, the Loon believes, MARCEdit only works with the Library of Congress, and only for specific kinds of author and subject URI reconciliation. More is emphatically possible, even now, and what the Loon just proposed would lessen centralized datastores’ stranglehold on cataloging by making them provably replaceable.
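
Sketched as a heavily hypothetical toy (the configuration shape, endpoints, and JSON paths below are all invented; nothing of the sort ships in MARCEdit today), the idea looks something like this:

```python
import json
import urllib.request
from string import Template

# Hypothetical, shareable per-source configuration: where to query a peer
# catalog's linked data and where the wanted bits live in the response.
SOURCES = [
    {
        "name": "Peer Catalog A",  # invented
        "url": Template("https://catalog-a.example.org/record/$isbn.jsonld"),
        "title_path": ["name"],
    },
    {
        "name": "National Library B",  # invented
        "url": Template("https://nlb.example.org/bib?isbn=$isbn&format=jsonld"),
        "title_path": ["metadata", "title"],
    },
]

def dig(doc, path):
    """Walk a list of keys into a nested JSON document; None if absent."""
    for key in path:
        if not isinstance(doc, dict) or key not in doc:
            return None
        doc = doc[key]
    return doc

def lookup(isbn: str):
    """Try each configured source in turn; return the first usable hit."""
    for src in SOURCES:
        try:
            with urllib.request.urlopen(src["url"].substitute(isbn=isbn)) as resp:
                doc = json.load(resp)
        except (OSError, ValueError):
            continue  # source down, record absent, or junk returned: move on
        title = dig(doc, src["title_path"])
        if title:
            return {"source": src["name"], "title": title}
    return None

print(lookup("9780000000002"))
```

The point of the shareable configuration files is exactly that last loop: if one source vanishes or misbehaves, the next configured peer answers instead, and no single datastore is indispensable.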

With NISO likely stalemating the Library of Congress over BIBFRAME, the Loon would look to European national libraries—those few that aren’t abandoning their linked-data experiments, anyway, which is another severe organizational wound to linked bibliographic data—for sources of bibliographic and authority linked data. Evergreen and Koha could get into the game also, by making linked data bibliographic-record representations (most likely in JSON-LD, which the Loon dislikes but accepts as the best of available serialization options) available on all catalog pages. This embraces and extends MetaDoor’s notion of peer-to-peer record sharing—more so if they also build configuration/translation/ETL files for the Loon’s posited MARCEdit retrieval system.
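
A minimal example of the sort of JSON-LD a catalog record page could embed, using schema.org vocabulary and invented values:

```python
import json

# What an Evergreen or Koha record page could emit inside a
# <script type="application/ld+json"> element.
record = {
    "@context": "https://schema.org",
    "@type": "Book",
    "@id": "https://catalog.example.org/record/1",
    "name": "Loons of the North",
    "author": {"@type": "Person", "name": "A. Loon"},
    "datePublished": "2022",
    "isbn": "9780000000002",
    "inLanguage": "en",
}

print(json.dumps(record, indent=2))
```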

The above enhancements, if successful, would likely gel into a pragmatic de facto cooperative-cataloging standard set. Once they have, pushing the said enhancements through a lightweight standards group like IETF would be a good idea. Or perhaps the Library of Congress could be prodded to adopt them. (Under no circumstances should NISO be allowed anywhere near them. NISO cannot be trusted.)

On the organizational side, the Loon sees some possibilities in the so-called “collective collection” in academic librarianship. (Yes, she is sensible of the irony in mentioning a construct named and popularized by OCLC Research.) The ripples if, for example, the Big Ten Academic Alliance were to swap collective-collection records among their ILSes in linked data instead of MARC would be substantial. That being a tall ask, a smaller but still-useful one would be a BTAA-specific linked-data aggregation of its collective-collection records, something like a Hathi Trust for bibliographic and authority linked data. A working model for consortium-level linked-data exposure and use is important, in the grand scheme of things. LibraryThing is another natural ally here; if their TinyCat were to both consume and produce linked data, that would be a game-changer.

Book and ebook vendors fear and hate MARC (and for good reason). The Loon thinks they could be inveigled into a less-persnickety system of making granular publication data web-available for their customers to snarf up via the Loon’s posited toolset. Indie and self-publishers would also cheerfully climb on board, were their platforms and tools capable of it—a WordPress plugin would go a long way here, as would a browser plugin that massages Amazon listings into a suitable catalog import. Google doesn’t care about books these days, but it seems just barely possible to talk them into turning their developer feed into a suitable catalog input.

In other words—and the Loon is grinning a beaky grin just now, because her BAE said exactly this to a roomful of library linked-data people about a decade ago—make it easy to rely on linked data, easier than it is to rely on MARC, and the library world will shift, from the smallest and poorest libraries upward… and David will at last stone Goliath to death with his linked-data slingshot.