Planet Cataloging

October 19, 2017


How OCLC transformed a library … and one student’s life


Throw down the gauntlet

At the beginning of the 1992–1993 school year, I issued a challenge to teachers, students, administrators, and community members around the Ovid-Elise Area Schools in Michigan. Our small, rural library, which supported two elementary schools, a middle school, and a high school, had recently joined OCLC and for the first time had access to libraries worldwide through WorldCat. Even though our materials budget was tiny, I stood up in the first district staff meeting of the year and promised them all I would get any book that anyone needed for any reason.

Access to the world’s knowledge transforms lives.

The teachers whispered and even snickered. Our library had never been very relevant to them. We weren’t included in their lesson plans, and they rarely sent students to find resources. After a couple weeks, I got my first request: a 17-book bibliography. And that changed everything.

Make the library relevant

Through WorldCat, I was able to supply all 17 books with interlibrary loan, to that teacher’s great surprise. And she told everyone. Pretty soon, we were getting all sorts of requests. I was borrowing and supplying books for student research papers, of course, but also to support hobbies, leisure reading, and even the graduate work that some teachers and other community members were doing. Through OCLC, I had access to the collections of major research universities, which we’d never imagined before. This was before the internet, of course.

Once I showed what the library was capable of, attitudes about it were transformed. Teachers invited me to their classrooms to talk about databases and research skills—and I could tell that they were learning along with the students. And since they recognized the library’s value, they started sticking up for the library budget. We went from a materials budget of $4,000 in 1992 to more than $100,000 in a few years. I didn’t have to fight for it—others saw how important the library was, and they fought for me.

Impact one life

Although it was great to see the library gain such credibility throughout the region, I’ll always remember the personal effect it had on one student. This young man went to the Assistant Principal’s office to tell him that he was quitting school. The Assistant Principal asked him, “What could we do to keep you in school?”

The young man thought for a bit, and then answered, “I’ll stay in school if I can build a kayak.”

So, the Assistant Principal took him down to see the shop teacher. Although happy to help, the shop teacher didn’t have any plans for building a kayak. So then, they headed to the library.

I opened WorldCat on the computer—it was the old black screen with the green text, and this young man was fascinated by it. We looked together and found some books on building kayaks, which I requested through interlibrary loan. When they arrived, he started building his kayak in shop and finished the rest of his classes. Honestly, I’m not sure he would have a high school diploma today if it weren’t for those books we got through WorldCat, which transformed his future.

George Bishop won the grand prize in OCLC’s 1997 essay contest, “What the OCLC Online Union Catalog Means to Me.”

You have to sell it

OCLC opened up the world to the people in my little school district. It made me realize that the smaller your library, the more you need OCLC. You simply don’t have the budget or the staff to provide everything people are going to need without using WorldCat as an interlibrary loan resource.

But I also realized that it wasn’t enough to simply provide library services and resources—I had to sell them to the people I wanted to use the library. Once I convinced them that the library could be valuable to them, though, they helped spread the word. We had all sorts of local people relying on our small library because of OCLC.

As OCLC celebrates its 50th anniversary this year, I just can’t imagine how many other lives have been transformed by this access to the world’s knowledge. And how many librarians have improved the lives of library users by providing the right resources at the right time—be they graduate-level text books or plans for building a simple kayak.

The post How OCLC transformed a library … and one student’s life appeared first on OCLC Next.

by George Bishop at October 19, 2017 02:02 PM

October 17, 2017

First Thus

Kirkus withdraws starred review after criticism.


Another instance of popular action ending in a retraction: this time not an article but a book review. When I read the review, it had already been changed, so I looked for it in the Internet Archive’s Wayback Machine, but it isn’t there; all I can see is the edited review.

I did, however, find a reply from the author of the book, Laura Moriarty, who quotes from the review:

“I was encouraged last week when Kirkus Reviews gave American Heart a starred review (starred as in ‘this is great!’ not one star like the mad people on Goodreads), calling it a “moving portrait of an American girl discovering her society in crisis, desperate to show a disillusioned immigrant the true spirit of America.” The Kirkus reviewer, an observant Muslim and a woman of color, called the book “sensible, thought-provoking, and touching . . and so rich that a few coincidences of plot are easily forgiven.” (Okay, okay, fine, I’ll take it.)”

I cannot find these quotes in the revised review, so these sections were apparently edited out; presumably, these are some of the parts that made people angry.

But I can only assume. To understand the anger behind these instances, the originals must be retained somewhere; otherwise, no one can understand what provoked the original anger and make their own decisions.

UPDATE: According to a private email, the author of the book, Laura Moriarty, has kept a copy of the review on her own website.

Thanks for the info! And people can decide for themselves whether the original review needed revision.


by James Weinheimer at October 17, 2017 02:48 PM

October 16, 2017

Terry's Worklog

MarcEdit 7 alpha weekly build

The following changes were made:

  • Bug Fix: Export Tab Delimited Records — open file and save file buttons not working
  • Enhancement: XML Crosswalk wizard — enabled root element processing
  • Bug Fix: XML Crosswalk wizard — some elements not being picked up; all descendants should now be accounted for
  • Bug Fix: Batch Process Function — file collisions in subfolders would result in overwritten results
  • Enhancement: Batch Processing Function — task processing uses the new task manager
  • Enhancement: Batch Processing Function — tasks can be processed as subdirectories

I had intended to move the program into beta this Sunday, but the above issues made me decide to keep it in alpha for one more week while I finish checking legacy forms and code.

Downloads can be retrieved from:


by reeset at October 16, 2017 04:33 AM

October 14, 2017

Resource Description & Access (RDA)

Resource Description and Access RDA

Resource Description and Access (RDA) is a new library cataloging standard and the successor to AACR2. Read the original article in the Librarianship Studies & Information Technology blog: RDA: Resource Description and Access.

by Salman Haider at October 14, 2017 12:48 PM

October 13, 2017

Resource Description & Access (RDA)

Library and Information Science Dissertations and Theses

Read original article in Librarianship Studies & Information Technology blog: Library and Information Science Dissertations and Theses

by Salman Haider at October 13, 2017 11:41 AM

First Thus

Academic Article Withdrawn Following “Serious and Credible” Threats of Violence – Quillette

Academic Article Withdrawn Following “Serious and Credible” Threats of Violence – Quillette

This article is about an academic paper with controversial (i.e. somewhat positive) views on colonialism. As this article mentions, Taylor and Francis withdrew this paper because of threats of violence against the author and the journal editor.

I wonder what this means. Are libraries supposed to withdraw their own printed copies? Has the digitized version disappeared from the databases?

I also ponder what such a trend might mean for libraries, which mostly have FAR more controversial materials than this article.


by James Weinheimer at October 13, 2017 10:11 AM

October 12, 2017

TSLL TechScans

OCLC supports changing FAST terminology but says LCSH must take the lead

In Andrew Pace's OCLC Next post dated 14 September 2017, he addresses the discussion around changing the term "Illegal Aliens" in OCLC's Faceted Access to Subject Terminology (FAST). Pace is the Executive Director, Technical Research at OCLC. 

He states that OCLC supports the change in terminology but is committed to working with the Library of Congress (LC) and the Program for Cooperative Cataloging (PCC) and will not be making any changes to terminology without LCSH changes. As he puts it, "FAST has no history of sweeping editorial changes in headings based on pervasive cultural change without first seeing those changes in the LCSH headings from which FAST is derived." After explaining the basics of FAST, he reiterates, "FAST has always been downstream of LCSH changes and the governance of headings that occurs through the PCC Subject Authority Cooperative Program (SACO)...We have no plans to establish a FAST governance model similar to SACO, nor an independent editorial group similar to that at the Library of Congress. FAST will follow LC’s lead."

As of October 2017, there has been no change in the heading, but it is not likely this debate is over. As Pace points out, "Librarians are the most proactive professionals I have ever witnessed when it comes to identifying an opportunity for positive change and aggressively seeking a solution."

See the full article at:

by Rachel Purcell at October 12, 2017 06:30 PM

October 11, 2017

Terry's Worklog

MarcEdit Delete Field by Position documentation

I was working through the code and found an option that, quite honestly, I didn’t even know existed. Since I’m creating new documentation for MarcEdit 7, I wanted to pin this somewhere so I wouldn’t forget again.

A number of times on the list, folks will ask if they can delete, say, the second field in a field group. Apparently, you can. In the MarcEditor, select the Add/Delete Field tool. To delete by position, you enter {#} in the Find box to denote the position to delete.

Obviously, this is pretty obscure, so in MarcEdit 7, this function is exposed as an option.


To delete multiple field positions, you just add commas. So, say I wanted to delete fields 2–5: I would enter 2,3,4,5 into the Field Data box and check this option. One enhancement I anticipate being requested is the ability to delete just the last occurrence. This is actually harder than you’d think, in part because it means I can’t process data as it comes in; I have to buffer it first, then process it, and the structure of the function complicates things. So for now, it’s by direct position. I’ll look at what it might take to allow for more abstract options (like last).
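As a quick illustration, the two variants described above might look like this in the Add/Delete Field tool (the 650 tag and the positions are hypothetical examples, not taken from the original post):

```
# MarcEdit 6.x syntax: delete the second 650 field in each record
Field: 650
Find:  {2}

# MarcEdit 7: delete field positions 2 through 5
# (with the delete-by-position option checked)
Field:      650
Field Data: 2,3,4,5
```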


by reeset at October 11, 2017 04:13 PM

TSLL TechScans

eBooks in the Law Library

A recent article in Inside Higher Ed asked if medical schools still need books. The question of the role of eBooks in all types of libraries has been batted around in some form or another since the advent of eBooks. While the Inside Higher Ed article settles on familiar answers and case studies of paperless and hybrid libraries, it seems clear that all libraries are arcing slowly toward having eBooks as a substantial part of their collections.

Law and Technology Resources for Legal Professionals, LLRX, is addressing the state of eBooks in law libraries in a three-part series. The first part, published this week, gives a helpful overview of some of the challenges and opportunities that come with adding eBooks to law library collections. Of particular interest to technical services librarians is the section on acquiring eBooks. Various platforms and modes of purchase are discussed. The article also briefly touches on issues related to integrating eBooks into the library's existing technological infrastructure.

The second article in the series promises to delve deeper into eBook acquisitions issues. The third part will present some case studies of how various law libraries have added eBooks to their collections.

by Travis Spence at October 11, 2017 12:35 AM

October 10, 2017


It’s time to reinvent the collective collection


This year, we are celebrating the cooperative’s 50th anniversary. In 1967, the Ohio library community changed the way they worked together to share their catalogs. It was truly a reinvention of cataloging, resource sharing and library discovery.

Today, as we begin our next 50 years, we are at another turning point that requires a new, even bolder vision. We are building on WorldCat, now the definitive global library collection, to provide library members, groups and regional and national partners even greater capacity to build, manage, and curate the collective collection.

The biggest picture

For years, OCLC Research has been at the center of industry-wide work that seeks to understand and plan for the evolution of library collections. We’ve been exploring trends such as the shift from locally owned to jointly managed print library collections. Several recent reports delve deeply into the subject, including Right-scaling Stewardship and Understanding the Collective Collection.

The conclusion? We anticipate that a large part of existing US print collections, distributed across many libraries, will move into coordinated or shared management in the near future. Interest in shared print management reflects a growing awareness that long-term preservation of the published record can be organized as a collective effort.

As print collections move into a shared environment, stacks are giving way to reimagined library spaces. These historic transformations require new methods for thinking about and managing collections.

A global approach to print management

To meet these needs, OCLC is bringing together the best tools, technology, and talent to provide a new approach for building and managing libraries’ collective collection. Our strategy encompasses all elements of shared print workflows—cooperative infrastructure, collection analysis, retention commitments, and quick and efficient resource sharing. It will enable regional, statewide, and even national holdings management for monographs.

This new approach starts with the global WorldCat data network, which already provides a comprehensive view of many regional and national collections. It is the only set of library data really able to manage and secure libraries’ record of human knowledge for future generations.

From a technical standpoint, we will build on the capabilities of Sustainable Collection Services to further analyze WorldCat and help libraries make the decisions needed on where and what print to keep for a national collection. Capabilities from our resource sharing services will be leveraged to allow for new resource sharing practices that reflect network-level commitments and resources. OCLC’s recent investments in state-of-the-art analytics capabilities help guide us as we build new services so libraries can make decisions for cooperative collection development. And this will all happen on the WorldShare platform, which is already used by hundreds of libraries for cataloging and ILL services.

OCLC is providing a new approach for building and managing libraries’ collective collection.

One such innovation we have recently announced is a shared print registration service that expands our shared print capabilities and enables libraries to preserve unique content by identifying protected monograph titles in shared print initiatives using WorldCat. A streamlined process of registering retention commitments will make the shared collection available and help it grow much more quickly.

This new capability will be included in a full OCLC cataloging subscription at no extra charge.

Continually reinventing the collective collection

For five decades, WorldCat supported libraries as they built their print collections … and it’s now becoming a vital tool as we begin to reconfigure these print collections and operationalize a new collective collection across consortia, across regions, and ultimately across countries.

We are excited about this vision and I invite you to view a short video with more details about our plans. Working together, we can significantly accelerate our efforts in collection management and shared print projects.

In the coming months, we will reach out to involve the community in a dialogue to help build this future. Together, we can make sure that the collective collection grows and changes to support libraries and the communities they serve over the next fifty years.

The post It’s time to reinvent the collective collection appeared first on OCLC Next.

by Mary Sauer-Games at October 10, 2017 05:35 PM

Coyle's InFormation

Google Books and Mein Kampf

I hadn't looked at Google Books in a while, or at least not carefully, so I was surprised to find that Google had added blurbs to most of the books. Even more surprising (although perhaps I should say "troubling") is that no source is given for the book blurbs. Some at least come from publisher sites, which means that they are promotional in nature. For example, here's a mildly promotional text about a literary work, from a literary publisher:

This gives a synopsis of the book, starting with:

"Throughout a single day in 1892, John Shawnessy recalls the great moments of his life..." 

It ends by letting the reader know that this was a bestseller when published in 1948, and calls it a "powerful novel."

The blurb on a 1909 version of Darwin's The Origin of Species is mysterious because the book isn't a recent publication with an online site providing the text. I do not know where this description comes from, but because the entire thrust of this blurb is about the controversy of evolution versus the Bible (even though Darwin did not press this point himself), I'm guessing that the blurb post-dates this particular publication.

"First published in 1859, this landmark book on evolutionary biology was not the first to deal with the subject, but it went on to become a sensation -- and a controversial one for many religious people who could not reconcile Darwin's science with their faith."
That's a reasonable view to take of Darwin's "landmark" book but it isn't what I would consider to be faithful to the full import of this tome.

The blurb on Hitler's Mein Kampf is particularly troubling. If you look at different versions of the book, you get both pro- and anti-Nazi sentiments, neither of which really belongs on a site that claims to be a catalog of books. Also note that because each book entry has only one blurb, the tone changes considerably depending on which publication you happen to pick from the list.

First on the list:
"Settling Accounts became Mein Kampf, an unparalleled example of muddled economics and history, appalling bigotry, and an intense self-glorification of Adolf Hitler as the true founder and builder of the National Socialist movement. It was written in hate and it contained a blueprint for violent bloodshed."

Second on the list:
"This book has set a path toward a much higher understanding of the self and of our magnificent destiny as living beings part of this Race on our planet. It shows us that we must not look at nature in terms of good or bad, but in an unfiltered manner. It describes what we must do if we want to survive as a people and as a Race."
That's horrifying. Note that both books are self-published, and the blurbs are the ones that I find on those books in Amazon, perhaps indicating that Google is sucking up books from the Amazon site. There is, or at least at one point there was, a difference between Amazon and Google Books. Google, after all, scanned books in libraries and presented itself as a search engine for published texts; Amazon will sell you Trump's tweets on toilet paper. The only text on the Google Books page still claims that Google Books is about search: "Search the world's most comprehensive index of full-text books." Libraries partnered with Google with lofty promises of gains in scholarship:
"Our participation in the Google Books Library Project will add significantly to the extensive digital resources the Libraries already deliver. It will enable the Libraries to make available more significant portions of its extraordinary archival and special collections to scholars and researchers worldwide in ways that will ultimately change the nature of scholarship." Jim Neal, Columbia University
I don't know how these folks now feel about having their texts intermingled with publications they would never buy and described by texts that may come from shady and unreliable sources.

Even leaving aside the grossest aspects of the blurbs and Google's hypocrisy about the commercialization of its books project, adding blurbs to the book entries with no attribution and clearly without vetting the sources is extremely irresponsible. It's also very Google to create sloppy algorithms that illustrate a basic ignorance of the content they are working with -- in this case, the world's books.

by Karen Coyle at October 10, 2017 11:43 AM

Terry's Worklog

MarcEdit 7 weekly build

Issues completed as part of the MarcEdit 7 weekly update. A couple of things to highlight:

  • I’ve integrated a check and download of a Unicode font into the Startup Wizard. This will enable users to retrieve and install the Noto fonts set into a private fonts collection for use by the application.
  • Clustering tools are now available as a stand-alone tool
  • New translations
  • Lots of bug fixes

Thanks to all the folks that are downloading the alpha and trying it out.  Most of the bug reports are directly related to user testing.

  1. Enhancement: All processes: Updated Temp file management
  2. Bug Fix: Plugin Manager failing because it’s missing a column for MarcEdit version (note, none of the current plugins will work with MarcEdit 7)
  3. Enhancement: Added new languages for Croatian, Estonian, Indonesian, Hungarian, and Vietnamese
  4. Enhancement: Offer download into private font collection the Noto fonts when no Unicode font is present. This will make the fonts *only* available for use with MarcEdit.
  5. Bug Fix: When editing a task list, the list would not refresh. This occurs when you have a theme defined.
  6. Bug Fix: Update all the Z39.50/SRU databases (specifically — the lc databases point to the old voyager endpoint that I believe is turned off)
  7. Bug Fix: Working with Saxon, XSLT transformations that link to files with spaces or special characters fail
  8. Bug Fix: Clustering Tool — selecting a top level cluster would include # of records in the cluster, not just the data to copy
  9. Enhancement: Clustering Tools — Add to the Main Window as a stand-alone tool
  10. Bug Fix: On install, the file types are not associated
  11. Enhancement: New Fonts dialog to support private font collections
  12. Bug Fix: Fonts not sticking when using the startup wizard
  13. Enhancement: Added Unicode Font download to help
  14. Bug Fix: Z39.50/SRU downloads were only downloading as .mrk formatted data, not as binary MARC. The Tool has been updated to select download type by extension.
  15. Enhancement: Updated the Icon a bit so that it’s not so transparent on the desktop.

Finally, I recorded and uploaded a video demonstrating the new Startup Wizard options related to the Unicode fonts. Please see:

The download can be retrieved from the MarcEdit 7 alpha/beta downloads page:

Questions, let me know.


by reeset at October 10, 2017 05:01 AM

October 08, 2017

Terry's Worklog

Saxon.NET and local file paths with special characters and spaces

I thought I’d post this here in case it can help other folks. One of the parsers I like to use is Saxon.Net, but within the .NET platform at least, it has problems performing XSLT or XQuery transformations when the files in question have paths with special characters or spaces (or when they reference files via xsl:include statements that live inside such paths). The question comes up a lot on the Saxon support site, and it sounds like Saxon is actually processing the data correctly: Saxon expects valid URIs, and a URI can’t contain spaces. Internally, the URI is escaped, but when you use those escaped paths against a local file system, accessing the file will fail. So, what do I mean? Here are two types of problem paths I encounter:

  • Path 1: c:\myfile\C#\folder1\test.xsl
  • Path 2: c:\myfile\C#\folder 1\test.xsl
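To see why these trip things up, here is a minimal sketch of the underlying escaping issue, assuming .NET's standard System.Uri behavior (a simplified path without the "#" is used here, since "#" adds its own fragment-parsing complications):

```
using System;

class UriEscapeDemo
{
    static void Main()
    {
        // A local path containing a space:
        var spaced = new Uri(@"c:\myfile\folder 1\test.xsl");

        // AbsoluteUri escapes the space; handing this string straight
        // to the file system will fail to find the file:
        Console.WriteLine(spaced.AbsoluteUri);  // file:///c:/myfile/folder%201/test.xsl

        // LocalPath (or Uri.UnescapeDataString) restores a usable path:
        Console.WriteLine(spaced.LocalPath);    // c:\myfile\folder 1\test.xsl
    }
}
```

Saxon works with the escaped form internally, which is why resolving it back to an unescaped local path matters when the file is opened from disk.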

When setting up a transformation with Saxon, you set up an XsltTransformer. You can create it from either a stream, like an XmlReader, or a URI. But here’s the problem. If you create the statement like this:

System.Xml.XmlReader xstream = System.Xml.XmlReader.Create(filepath);
transformer = xsltCompiler.Compile(xstream).Load();

The program can read Path 1, but it will always fail on Path 2, and it will fail on Path 1 if the stylesheet includes secondary data. If, rather than using a stream, I use a URI, like:

transformer = xsltCompiler.Compile(new Uri(sXSLT, UriKind.Absolute)).Load();

Both paths will break. On the Saxon list, there was a suggestion to create a sealed class and wrap the URI in it. You’d end up with code that looked more like:

transformer = xsltCompiler.Compile(new SaxonUri(new Uri(sXSLT, UriKind.Absolute))).Load();

public sealed class SaxonUri : Uri
{
    public SaxonUri(Uri wrappedUri)
        : base(GetUriString(wrappedUri), GetUriKind(wrappedUri))
    {
    }

    private static string GetUriString(Uri wrappedUri)
    {
        if (wrappedUri == null)
            throw new ArgumentNullException("wrappedUri", "wrappedUri is null.");
        if (wrappedUri.IsAbsoluteUri)
            return wrappedUri.AbsoluteUri;
        return wrappedUri.OriginalString;
    }

    private static UriKind GetUriKind(Uri wrappedUri)
    {
        if (wrappedUri == null)
            throw new ArgumentNullException("wrappedUri", "wrappedUri is null.");
        if (wrappedUri.IsAbsoluteUri)
            return UriKind.Absolute;
        return UriKind.Relative;
    }

    public override string ToString()
    {
        if (IsWellFormedOriginalString())
            return OriginalString;
        if (IsAbsoluteUri)
            return AbsoluteUri;
        return base.ToString();
    }
}

And this gets us closer. Using this syntax, Path 1 doesn’t work, but Path 2 will. So you could use an if…then statement to look for spaces in the XSLT file path: if there are no spaces, open the stream; if there are, wrap the URI. Unfortunately, that doesn’t work either, because if you include a reference (like xsl:include) in your XSLT, both Path 1 and Path 2 fail: internally, the BaseURI is set to an escaped version of the URI, and Windows will fail to locate the file. At that point you may feel pretty much stuck, but there are still other options; they just take more work. In my case, the solution I adopted was to create a custom XmlResolver. This lets me handle all the URI resolution myself, and in the case of the two path statements above, I’m interested in handling all local file URIs. So how does that work?

xsltCompiler.XmlResolver = new CustomeResolver();
transformer = xsltCompiler.Compile(new Uri(sXSLT, UriKind.Absolute)).Load();

internal class CustomeResolver : XmlUrlResolver
{
    public override object GetEntity(Uri absoluteUri, string role, Type ofObjectToReturn)
    {
        if (absoluteUri.IsFile)
        {
            string filename = absoluteUri.LocalPath;
            if (System.IO.File.Exists(filename) == false)
            {
                // The escaped path isn't on disk; try the unescaped version.
                filename = Uri.UnescapeDataString(filename);
                if (System.IO.File.Exists(filename) == false)
                    return (System.IO.Stream)base.GetEntity(absoluteUri, role, ofObjectToReturn);
            }
            System.IO.Stream myStream = new System.IO.FileStream(filename, System.IO.FileMode.Open);
            return myStream;
        }
        return (System.IO.Stream)base.GetEntity(absoluteUri, role, ofObjectToReturn);
    }
}

By creating your own XmlResolver, you can fix the URI problems and allow Saxon to process both use cases above.


by reeset at October 08, 2017 04:27 AM