We are very pleased to announce two new features in our popular Book Display Widgets product which we think will help libraries of all kinds show off more of their collections!
Responding to several requests, we have added the ability to use your own cover source in Book Display Widgets. If you have a specialized collection or some unusual titles, now you can supplement our default cover service with your own covers.
For more information on how to use Custom Covers in Book Display Widgets, see the instructions or check out our quick how-to video.
Recognizing that not every item in your library carries an ISBN, we have added two new options for data in Book Display Widgets: UPCs and ISSNs. These additions will allow you to show off your video collections and your academic journals with ease!
UPCs are product codes commonly found on items such as DVDs, music, and video games. Help your patrons discover their new favorite movie!
ISSNs are for journals. You can use these to highlight your robust journal collection to your students!
For more information on using these new data sources, please see our instruction page or view this quick how-to video.
On Monday 25th November, the CIG event “Linked Data: what cataloguers need to know” took place in Birmingham. It was a very full day, and thank you to everyone who came along to find out more. Special thanks also to our speakers: Tom Meehan, Owen Stephens, Corine Deliot and Celine Carty.
Links to all of the presentations from the day have been made available on the CIG web pages. These should be a useful reference for all those who attended on the day, as well as interesting reading for anyone who wished they had been there. Thanks also to all who tweeted from the event. The enthusiasm of the attendees gave the whole day a great buzz, and we hope it was useful to everyone there.
We are currently assessing the feedback forms, so if you were there on the day and haven’t yet filled in the feedback survey then please do, as we will use that information to help inform and improve future events.
Struggling towards usable linked data services at SWIB13
Paraphrasing some of the challenges proposed by keynote speaker Dorothea Salo, the unofficial theme of the SWIB13 conference in Hamburg might be described as “No more ontologies, we want out of the box linked data tools!”. This sounds like we are dealing with some serious confrontations in the linked open data world. Judging by Martin Malmsten’s LIBRIS battle cry “Linked data or die!” you might even think there’s an actual war going on.
Looking at the whole range of this year’s SWIB pre-conference workshops, plenary presentations and lightning talks, you may conclude that “linked data is a technology that is maturing”, as Rurik Greenall rightly states in his conference report, “but it has quite a way to go before we can say this stuff is ready to roll out in libraries”. I completely agree with this. Personally, I got the impression that we are in a paradoxical situation: on the one hand people speak of “we” and “community”, while on the other they take fundamentalist positions, unconditionally defending their own beliefs and slandering and ridiculing other options. In my view there are multiple, sometimes overlapping, sometimes irreconcilable “we’s” and “communities”. Sticking to your own point of view without any willingness to reason with the other party really does not get “us” any further.
This all sounds a bit grim, but I again agree with Rurik Greenall when he says that he “enjoyed this conference immensely because of the people involved”. And of course on the whole the individual workshops and presentations were of a high quality.
Before proceeding to the positive aspects of the conference, let me first elaborate a bit on the opposing positions I observed during the conference, which I think we should try to overcome.
Developers disagree on a multitude of issues:

- Formats: developers hate MARC. Everybody seems to hate RDF/XML; JSON-LD seems to be the thing for RDF, but some say only Turtle should be used, or just JSON (not to mention PDF). A brief example of the two contested serializations follows this list.
- Tools and languages: Perl users hate Java, Java users hate PHP, and there’s Python and Ruby bashing.
- Ontologies: create your own, reuse existing ones, upper ontologies yes or no, no ontologies but usable tools.
- Operating systems: Windows/UNIX/Linux/Apple… it’s either/or.
- Open source vs. commercial software: need I say more?
- Beer: Belgians hate German beer, or any foreign beer for that matter.
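For readers wondering what the serialization argument is actually about, here is a minimal sketch, assuming Python with rdflib version 6 or later (which bundles a JSON-LD serializer), that emits one and the same triple in both contested formats. The URI is made up for illustration:

```python
# One graph, two contested serializations. The URI below is illustrative only.
from rdflib import Graph, Literal, URIRef, Namespace

DC = Namespace("http://purl.org/dc/terms/")
g = Graph()
g.bind("dcterms", DC)
work = URIRef("http://example.org/work/1")
g.add((work, DC.title, Literal("Linked data or die!")))

print(g.serialize(format="turtle"))   # compact and human-readable
print(g.serialize(format="json-ld"))  # JSON-based; needs rdflib >= 6.0
```

The data is identical either way, which is rather the point: the disagreement is about ergonomics, not expressiveness.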
OK, I hope I made myself clear. The point is that I have no problem at all with having diverse opinions, but I dislike it when people are convinced that their own opinion is the only right one and refuse to have a conversation with those who think otherwise, or even to respect their choices in silence. The developer “community” definitely has quite a way to go.
Apart from these internal developer disagreements I noticed, there is the more fundamental gap between developers and users of linked open data. By “users” I do not mean “end users” in this case, but the intermediary deployers of systems. Let’s call them “libraries”.
Linked data developers talk about tools and programming languages, metadata formats, open source, ontologies, technology stacks. Librarians want to offer useful services to their end users, right now. They may not always agree on what kind of services and what kind of end users, and they may have an opinion on metadata formats in systems, but their outlook is slightly different from the developers’ horizon. It’s all about expectations and expectation management, which is basically the point of Dorothea Salo’s keynote. Of course theoretical, scientific and technical papers and projects are needed to take linked data further, but libraries need linked data tools focused on providing new services to their end users/customers in the world of the web, tools that can easily be implemented and maintained.
In this respect OCLC’s efforts to add linked data features to WorldCat are praiseworthy. OCLC’s Technology Evangelist Richard Wallis presented his view on the benefits of linked open data for libraries, using Google’s Knowledge Graph as an example. His talk was aimed mainly at a librarian audience; at SWIB, where the majority of attendees are developers or technology staff, this seemed somewhat misplaced. By chance I had been present at Richard’s talk at the Dutch National Information Professional annual meeting two weeks earlier, where he delivered almost the same presentation to a large room full of librarians. There and then it was completely on target. For the SWIB audience this all may have been old news, except for the heads-up about OCLC’s work on FRBR “Works” BIBFRAME-type linked data, which will result in published URIs for Works in WorldCat.
An important point here is that OCLC is a company with many library customers worldwide, so developments like this benefit all of these libraries. The same applies to customers of one of the other big library system vendors, Ex Libris, who have been working on linked data features for their so-called “next generation” tools for some time now, in close cooperation with the international user groups’ Linked Open Data Special Interest Working Group, as I explained in the lightning talk I gave. Open source library systems like Koha are also working on adding linked open data features to their tools. It is with tools like these, which reach a large number of libraries, that linked open data for libraries can spread relatively quickly.
In contrast to this linked data broadcasting, the majority of the SWIB presentations showed local proprietary development or research projects, though mostly of high quality. Where systems or tools were built, all the code and ontologies are available on GitHub, making them open source. However commendable that is, open source on GitHub does not mean that these potentially groundbreaking systems and ontologies can and will be adopted as de facto standards in the wider library community. Most libraries, both public and academic, depend on commercial system and content providers and cannot afford large-scale local system development. This also applies, up to a point, to libraries that deploy large open source tools like Koha, I presume.
It would be great if some of these open source projects could evolve into commonly used standard tools, like Koha, Fedora and Drupal, to name a few. Vivo is another example of an open source project rapidly moving towards becoming an accepted standard: a framework for connecting and publishing research information of different natures and origins, based on linked data concepts. At SWIB there was a pre-conference “VivoCamp”, organised by Lambert Heller, Valeria Pesce and myself. Research information is an area rapidly gaining importance in the academic world, and the Library of the University of Amsterdam, where I work, is starting a Vivo pilot in which I am involved. (Yes, the Library of the University of Amsterdam uses both commercial providers like OCLC and Ex Libris and many open source tools.) The VivoCamp was a good opportunity for a practical introduction to and discussion about the framework, not least thanks to the presence of John Fereira of Cornell University, one of the driving forces behind Vivo. All 26 attendees expressed their interest in a follow-up.
Vivo, although it may be imperfect, represents the type of infrastructure that may be needed for large-scale adoption of linked open data in libraries. PUB, the repository-based linked data research information project at Bielefeld University presented by Vitali Peil, is aimed at exactly the same domain as Vivo, but again it is a locally developed system, using another smaller-scale open source framework (LibreCat/Catmandu of Bielefeld, Ghent and Lund universities) and a number of different ontologies, of which Vivo’s is just one. My guess is that, although PUB/LibreCat might be superior, Vivo will become the de facto standard in linked data based research information systems.
Instead of focusing on systems, maybe the library linked data world would be better served by a common user-friendly metadata+services infrastructure. Of course, the web and the semantic web are supposed to be that infrastructure, but in reality we all move around and process metadata all the time, from one system and database to another, in order to be able to offer new legacy and linked data services. At SWIB there was mention of a number of tools for ETL, which is developer jargon for Extract, Transform, Load. By the way, jargon is a very good way to widen the gap between developers and libraries.
There were pre-conference workshops for the ETL tools Catmandu and Metafacture, and in a lightning talk SLUB Dresden, in collaboration with Avantgarde Labs, presented a new project focused on using ETL for a separate multi-purpose data management platform, serving as a unified layer between external data sources and services. This looks like a very interesting concept, similar to the ideas of a data services hub I described in an earlier post, “(Discover AND deliver) OR else”. The ResourceSync project, presented by Simeon Warner, tries to address the same issue by a different method: distributed synchronisation of web resources.
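For those unfamiliar with the pattern, here is a toy sketch of the ETL idea in Python — not Catmandu or Metafacture themselves, which are Perl and Java toolkits. The file names and field choices are illustrative only:

```python
# Toy ETL pipeline: Extract MARC 21 records, Transform them into minimal
# dicts, Load them as JSON for some downstream service. Catmandu and
# Metafacture provide far richer, configurable versions of this pattern.
import json
from pymarc import MARCReader

def extract(path):
    """Extract: read raw MARC 21 records from a file."""
    with open(path, "rb") as fh:
        yield from MARCReader(fh)

def transform(record):
    """Transform: reduce a record to a minimal dict (245 $a / 100 $a)."""
    title_field = record["245"]
    author_field = record["100"]
    return {
        "title": title_field["a"] if title_field else None,
        "author": author_field["a"] if author_field else None,
    }

def load(docs, path):
    """Load: write the transformed documents as JSON."""
    with open(path, "w", encoding="utf-8") as out:
        json.dump(list(docs), out, indent=2)

load((transform(r) for r in extract("catalogue.mrc")), "catalogue.json")
```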
One can say that the BIBFRAME project is also focused on data infrastructure, albeit at the moment limited to the internal library cataloguing workflow, aimed at replacing MARC. An overview of the current state of the project was presented by Lars Svensson of the German National Library.
The same can be said for the National Library of Sweden’s new LIBRIS linked data based cataloguing system, presented by Martin Malmsten (Decentralisation, Distribution, Disintegration – towards Linked Data as a First Class Citizen in Libraryland). The big difference is that they’re actually doing what BIBFRAME is trying to plan. The war cry “Linked data or die!” refers to the fact that it is better to start from scratch with a domain and format independent data infrastructure, like linked data, than to try and build linking around existing rigid formats like MARC. Martin Malmsten rightly stated that we should keep formats outside our systems, as is also the core statement of the MARC-MUST-DIE movement. Proprietary formats can be dynamically imported and exported at will, as was demonstrated by the “MARC” button in the LIBRIS user interface. New library linked data developments will have to coexist with the existing wider library metadata and systems environment for some time.
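To make that “formats outside our systems” idea a bit more tangible, here is a minimal sketch, assuming Python with rdflib. It is emphatically not LIBRIS’s actual code, and the URI is invented: the canonical store is RDF, and a MARC-style view (here in MarcEdit’s mnemonic text format) is rendered only on demand at the export boundary.

```python
# Sketch of keeping formats outside the system: the store is RDF, and a
# MARC-style view is generated only when someone presses the "MARC" button.
# Illustrative only; not LIBRIS's implementation. The URI is made up.
from rdflib import Graph, Literal, URIRef, Namespace

DC = Namespace("http://purl.org/dc/terms/")

store = Graph()
work = URIRef("http://example.org/work/1")
store.add((work, DC.title, Literal("Linked data or die!")))

def export_marc_view(g: Graph, subject: URIRef) -> str:
    """Render a minimal MARC mnemonic line (MarcEdit-style) for one work."""
    title = g.value(subject, DC.title)
    return f"=245  10$a{title}" if title is not None else ""

print(export_marc_view(store, work))  # =245  10$aLinked data or die!
```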
Like all other local projects, the LIBRIS source code and ontology descriptions are available on GitHub. In this case the sheer scope of the National Library of Sweden and of the project makes it a bit more plausible that this may actually be reused on a larger scale. At least the library cataloguing ontology in JSON-LD there is worth a look.
To return to our starting point, the LIBRIS project acknowledges the fact that we need actual tools besides the ontologies. As Martin Malmsten quoted: “Trying to sell the idea of linked data without interfaces is like trying to sell a fax without the invention of paper”.
The central question in all this: what is the role of libraries in linked data? Developers or implementers, individually or in a community? There is obviously not one answer. Maybe we will know more at SWIB14. Paraphrasing Fabian Steeg and Pascal Christoph of hbz, and Dorothea Salo, next year’s theme might be “Out of the box data knitting for great justice”.
Field books (primary source documents created during field research, and of great importance for natural history) are unique because they come in a variety of formats and material types. The recent issue of D-Lib Magazine features an article by Sonoe Nakasone and Carolyn Sheffield, “Descriptive metadata for field books: methods and practices of the Field Book Project”. In an earlier post I talked about the project; this article now goes into more detail regarding the descriptive metadata used for the Field Book Registry. It was decided that these items would be described at both the collection and the item level. Metadata schemas from the museum, archives and library communities were chosen for this task: Natural Collections Description (NCD) is used for collection-level records and Metadata Object Description Schema (MODS) for item-level records, with Encoded Archival Context (EAC) being used for authority records of collectors, organizations and expeditions. These schemas are combined into one database, the Field Book Registry. Explicit connections are established between collection, item and authority records via IDs, and controlled vocabularies like the Thesaurus of Geographic Names (TGN) or LCSH enrich the records. The article closes with screenshots of the cataloging interface and a discussion of some challenges and future developments.
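As a rough illustration of the ID-based linking the article describes, here is a hypothetical Python sketch. The record classes, field names and identifiers are all invented (the real registry of course stores NCD, MODS and EAC as XML), but the connections mirror the approach:

```python
# Hypothetical sketch of the Field Book Registry's linking model:
# collection-level (NCD), item-level (MODS) and authority (EAC) records
# connected by explicit IDs. All names and IDs here are invented.
from dataclasses import dataclass, field

@dataclass
class AuthorityRecord:            # EAC: collectors, organizations, expeditions
    id: str
    name: str

@dataclass
class CollectionRecord:           # NCD: collection-level description
    id: str
    title: str
    collector_ids: list = field(default_factory=list)

@dataclass
class ItemRecord:                 # MODS: item-level description
    id: str
    title: str
    collection_id: str = ""
    creator_ids: list = field(default_factory=list)

collector = AuthorityRecord("eac-001", "Example Collector")
collection = CollectionRecord("ncd-001", "Example Expedition Field Books",
                              collector_ids=["eac-001"])
item = ItemRecord("mods-001", "Field notebook, vol. 1",
                  collection_id="ncd-001", creator_ids=["eac-001"])
```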
Part 2 of the report on the 2013 National Acquisitions Group Conference in York (the papers are available to members at http://www.nag.org.uk/events/2013/07/nag-conference-2013/).
Adopting RDA by Stuart Hunt was an updated version of a paper I’d heard previously at other events, adapted for non-specialist cataloguers. The key points were that going live with RDA would be a sequence of events for most libraries, with many of the timings being dictated by the timetables of external record sources and suppliers. Other significant issues would be how to manage unavoidable hybridity and data in multiple environments (“classic catalogues”, discovery layers).
RFID Update: Mick Fortune surveyed “the evolving RFID landscape”, concentrating on new applications, new concerns and new standards. Using RFID only for access control, membership smartcards and security (as most libraries in the UK currently are) was, he said, “Like buying a smartphone and using it to make calls”. Development has been inhibited by being driven by RFID suppliers rather than libraries, by a lack of involvement from LMS suppliers, and by a lack of agreed standards for data or frequencies.
New applications in development worldwide include stock management, supply chain monitoring and mobile apps that interact with stock. Cooperative working has, though, promoted the adoption of common standards. For instance, a UK initiative, LCF (Library Communication Framework), whose version 1 was published in September, aims to standardise exchange between LMS, RFID and third-party apps.
Privacy continues to be a concern (libraries will be obliged to complete privacy impact assessments (PIAs) in 2014), as does the discovery that RFID tags are potentially vulnerable to alteration by smartphones.
BIC, UKSLC and Accreditation by Simon Edwards explained the history, structure and role of BIC (for those who weren’t aware of it): jointly set up by CILIP, the BL and the Booksellers’ and Publishers’ Associations, it works to establish shared standards among all those involved in the supply chain. He also outlined the Accreditation process (which we at the City have achieved) and introduced UKSLC (UK Standard Library Categories). Formerly known as eLibraries, these are versions of the BIC subject categories adapted to organise the stock in libraries and provide subject access, though Edwards stressed that “they are not a substitute for Dewey”. Most cataloguers would be surprised by the assertion that “patrons have changed because of the internet” in that they now want to “search by subject” (is there anything new about this?), and it wasn’t clear to me what he envisaged the relationship between UKSLC and classification should be.
David Stoker, in a heavily visual presentation, described the lengthy and challenging process of renovating the Liverpool Central Library and the PFI initiative that financed it. The new library is an undeniably impressive achievement and has apparently proved hugely popular with its users. This promotional video gives an idea of what it is like …
Lastly, Ben Showers introduced the National Monographs Strategy initiative from JISC, designed to answer the question “should libraries be collecting the same books as each other?” Presumably the implied answer is “no”, and the question is being asked in the context of potentially replacing physical collections with space-saving e-resources rather than simply reviving co-operative purchasing schemes such as the old MSC.
The co-design pilot project is running for six months and is due to end in December 2013. Showers explained that it is based on the principles of “thinking in the open”, being “evidence based” and “community-led”, and “iteration not repetition”. Involvement from all interested parties (potentially all libraries with any kind of research function) is actively encouraged via their blog, which is intended to be the main focus for the project: http://bit.ly/monographsuk . So do have a look and feel free to contribute!
The titles of these conferences do tend to be designed to attract attention by snagging on contemporary concerns, rather than providing a coherent theme. “Sharing today, securing tomorrow” would, perhaps, suggest one thing to a public librarian (in the context of “shared services”) and it was interesting to be reminded of the different meanings that it might have in an academic or research context.
If there was a thread through these apparently disparate papers it might have been the question of how to foster sharing and co-operation in the absence of the kind of centralised, top-down governmental intervention represented by the Public Library Standards (and, I suppose, the national library websites for Scotland and Wales). Ben Showers’ community-led and crowdsourced approach certainly offers a theoretical alternative, and it will be interesting to follow its progress.
Richard, Jennifer and I (Joy) were presenters at the 2013 Illinois Library Association Annual Conference back in October. Our session covered the oh-so-popular topic of RDA.
We described the session as: “Want to know what has changed in the MARC record with the implementation of RDA? An overview highlighting the differences, from the simple to the complex. We will look at cataloging examples in a variety of formats.”
Thanks to Clarke, who pointed out in a comment on a recent post here that Google Scholar now has a “saved citations” citation management feature.
I haven’t done any experimenting with it; anyone have a review? What do you think: is this going to end up drawing a significant portion of our patrons’ use away from other citation management alternatives (including some we pay for)?
Today we’re launching Scholar Library, your personal collection of articles in Scholar. You can save articles right from the search page, organize them by topic, and use the power of Scholar’s full-text search & ranking to quickly find just the one you want – at any time and from anywhere. You decide what goes into your library and we’ll provide all the goodies that come with Scholar search results – up to date article links, citing articles, related articles, formatted citations, links to your university’s subscriptions, and more. And if you have a public Scholar profile, it’s easy to quickly set up your library with the articles you want – with a single click, you can import all the articles in your profile as well as all the articles they cite.
In WebDewey, when a number is displayed in the "hierarchy box," above it are displayed its parent, its parent's parent, its parent's parent's parent, etc., while below it are displayed all the numbers for which it is the parent. Sometimes you might be a little surprised by what you see or don't see there. This is particularly the case when spans are involved -- and more particularly the case when overlapping spans are involved.
For example, when 220-290 is displayed, we find above it 200; we find below it 220, 230-280, and 290.
When 230-280 becomes the focal notation, we find above it 200 and 220-290; we find below it 230-270 and 250-280.
When 230-270 becomes the focal notation, we find above it 200, 220-290, and 230-280; we find below it 230 and 240.
The situation at 230-270 (as compared to the situation at 250-280) may seem odd, but it is, in fact, correct. After all, there is an entry for 230-270 Specific elements of Christianity, and its parent is 230-280, as shown.
But why are 250, 260, and 270 not shown as children of 230-270? Because a DDC notation can have only one parent designated in the MARC classification record (153 field, subfield e). The two spans, 230-270 and 250-280, have equal claim as candidate parents for 250 and 260; the two spans, 230-270 and 270-280, have equal claim as candidate parents of 270. But in each case only one of the candidates can be designated the parent. It so happens that 250-280 has been designated the parent of both 250 and 260, while 270-280 has been designated the parent of 270.
The problem arises because of overlapping spans. The graphic below shows the overlapping of spans that occurs in the 200s:
As can be seen, 230-270 overlaps with both 250-280 and 270-280. Fortunately, there are very few cases of overlapping spans. But when you see something in a hierarchy that at first glance looks odd, consider whether any spans are involved, and remember that any DDC number or span of numbers can have only one parent.
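To make the single-parent rule concrete, here is a small sketch using a hypothetical Python model rather than WebDewey’s actual data structures. Each notation or span records exactly one designated parent, as in the 153 field, subfield e, and the children shown in the hierarchy box are simply that relation inverted (the parent of 270-280 itself is omitted, as it is not stated above):

```python
# Each DDC notation/span has exactly one designated parent (153 subfield e).
# Hypothetical model; the values follow the examples discussed above.
designated_parent = {
    "220-290": "200",
    "230-280": "220-290",
    "230-270": "230-280",
    "250-280": "230-280",
    "230": "230-270",
    "240": "230-270",
    "250": "250-280",  # 230-270 also covers 250, but only one parent counts
    "260": "250-280",
    "270": "270-280",  # likewise, 230-270 also covers 270
}

def children(notation):
    """The hierarchy box's lower half: invert the single-parent relation."""
    return sorted(n for n, p in designated_parent.items() if p == notation)

print(children("230-270"))  # ['230', '240'] -- 250, 260 and 270 live elsewhere
```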
A couple of updates related to the RDA Helper and Task generator. Here’s the change list:
Additionally, I have reposted the API documentation (which was removed when my Oregon State accounts were deleted). It can be found at: http://marcedit.reeset.net/software/api_docs/
| MARC 21 field | Subfield | Subfield name | RDA instruction | RDA element |
| 240 | a | Uniform title | 6.2.2 | Preferred Title for the Work |
| 240 | a | Uniform title | 6.3 | Form of Work |
| 240 | a | Uniform title | 6.4 | Date of Work |
| 240 | a | Uniform title | 6.5 | Place of Origin of the Work |
| 240 | a | Uniform title | 6.6 | Other Distinguishing Characteristic of the Work |
| 240 | d | Date of treaty signing | 6.4 | Date of Work |
| 240 | f | Date of a work | 6.10 | Date of Expression |
| 240 | g | Miscellaneous information | 6.22 | Signatory to a Treaty, etc. |
| 240 | k | Form subheading | 6.2.2 | Preferred Title for the Work |
| 240 | l | Language of a work | 6.11 | Language of Expression |
| 240 | m | Medium of performance for music | 6.15 | Medium of Performance |
| 240 | n | Number of part/section of a work | 6.2.2 | Preferred Title for the Work |
| 240 | n | Number of part/section of a work | 6.3 | Form of Work |
| 240 | n | Number of part/section of a work | 6.4 | Date of Work |
| 240 | n | Number of part/section of a work | 6.5 | Place of Origin of the Work |
| 240 | n | Number of part/section of a work | 6.6 | Other Distinguishing Characteristic of the Work |
| 240 | n | Number of part/section of a work | 6.16 | Numeric Designation of a Musical Work |
| 240 | o | Arranged statement for music | 6.12 | Other Distinguishing Characteristic of the Expression |
| 240 | p | Name of part/section of a work | 6.2.2 | Preferred Title for the Work |
| 240 | p | Name of part/section of a work | 6.3 | Form of Work |
| 240 | p | Name of part/section of a work | 6.4 | Date of Work |
| 240 | p | Name of part/section of a work | 6.5 | Place of Origin of the Work |
| 240 | p | Name of part/section of a work | 6.6 | Other Distinguishing Characteristic of the Work |
| 240 | r | Key for music | 6.17 | Key |
| 240 | s | Version | 6.12 | Other Distinguishing Characteristic of the Expression |
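As a rough illustration of how a cataloguing tool might apply such a mapping, here is a hedged Python sketch using pymarc. The RDA_240_MAP dictionary is a hypothetical, partial encoding of the table above, covering only the subfields that map to a single RDA element, and the file name is made up:

```python
# Report which RDA element each 240 subfield may record, using a partial,
# hypothetical encoding of the mapping table above.
from pymarc import MARCReader

RDA_240_MAP = {
    "d": "6.4 Date of Work",
    "f": "6.10 Date of Expression",
    "g": "6.22 Signatory to a Treaty, etc.",
    "l": "6.11 Language of Expression",
    "m": "6.15 Medium of Performance",
    "o": "6.12 Other Distinguishing Characteristic of the Expression",
    "r": "6.17 Key",
    "s": "6.12 Other Distinguishing Characteristic of the Expression",
    # $a, $k, $n and $p map to several candidate RDA elements (6.2.2-6.16),
    # so a real tool would need context to choose among them.
}

with open("records.mrc", "rb") as fh:   # hypothetical file of MARC 21 records
    for record in MARCReader(fh):
        for field in record.get_fields("240"):
            for code, label in RDA_240_MAP.items():
                for value in field.get_subfields(code):
                    print(f"240 ${code} {value!r} -> {label}")
```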
Without institutional support, without worrying about legality, she just… archived for posterity.
In a storage unit somewhere in Philadelphia, 140,000 VHS tapes sit packed into four shipping containers. Most are hand-labeled with a date between 1977 and 2012, and if you pop one into a VCR you might see scenes from the Iranian Hostage Crisis, the Reagan Administration, or Hurricane Katrina.
It’s 35 years of history through the lens of TV news, captured on a dwindling format.
It’s also the life work of Marion Stokes, who built an archive of network, local, and cable news, in her home, one tape at a time, recording every major (and trivial) news event until the day she died in 2012 at the age of 83 of lung disease.
…There weren’t any provisions for the tape collection in Stokes’s will, but anyone who knew her knew she wanted them to be used as an archive. She had been born at the beginning of the Great Depression, and like many people of her generation, saved a lot of things. Scattered throughout the family’s various properties, she had stored a half-century of newspapers and 192 Macintosh computers. But the tapes were special. “I think my mother considered this her legacy,” Metelits says.
The Incredible Story of Marion Stokes, Who Single-Handedly Taped 35 Years of TV News
From 1977 to 2012, she recorded 140,000 VHS tapes’ worth of history. Now the Internet Archive has a plan to make them public and searchable. Sarah Kessler, fastcompany.com.
Another article I was alerted to on Hacker News. I’ve noticed that the audience on Hacker News is very interested in library-and-archive type issues (whether involving actual libraries and archives or not, but frequently so), as well as generally quite supportive of actual libraries, archives, librarians, and archivists. I worry that some of the support is more nostalgic than anything else; or maybe “aspirational” is a better way to think of it: supportive of what libraries and librarians could be doing. (Not to take away from the awesome stuff the Internet Archive is doing.)
| MARC 21 field | Subfield | Subfield name | RDA instruction | RDA element |
| 250 | a | Edition statement | 2.5.2 | Designation of Edition |
| 250 | a | Edition statement | 2.5.6 | Designation of a Named Revision of an Edition |
| 250 | b | Remainder of edition statement | 2.5.3 | Parallel Designation of Edition |
| 250 | b | Remainder of edition statement | 2.5.4 | Statement of Responsibility Relating to the Edition |
| 250 | b | Remainder of edition statement | 2.5.5 | Parallel Statement of Responsibility Relating to the Edition |
| 250 | b | Remainder of edition statement | 2.5.6 | Designation of a Named Revision of an Edition |
| 250 | b | Remainder of edition statement | 2.5.7 | Parallel Designation of a Named Revision of an Edition |
| 250 | b | Remainder of edition statement | 2.5.8 | Statement of Responsibility Relating to a Named Revision of an Edition |
| 250 | b | Remainder of edition statement | 2.5.9 | Parallel Statement of Responsibility Relating to a Named Revision of an Edition |
MarcEdit, the popular free library metadata software suite, includes new functionality that enables OCLC member libraries to contribute and enhance their bibliographic and holdings data in WorldCat from within MarcEdit.