In which language do you add a word or short phrase, if necessary, to clarify the role of a person, family, or corporate body named in a statement of responsibility? (RDA)
1. language/script of the resource
2. language of the agency
Russlan und Ludmila : Oper in 5 Aufzügen / Mikhail I. Glinka ; [editado por] M. Balakirew und S. Liapunow
On 20/04/2015 23.30, Gene Fieg wrote:
> I think this gets back to the whole issue of abbreviations. Which is more understandable to a patron, anglophone or not: 3rd edition, 3rd ed., or third edition? Our goal should be to communicate the WEMI as clearly as possible to the patron.
I just figure that if someone is hopelessly confused by 3rd edition, 3rd ed., or third edition, they’ll never be able to figure out 95% of the rest of a bibliographic record. The edition is relatively simple compared to what a series is, how subjects work, finding out the “correct” form of a corporate body, and so on. I think the *form* of the edition is unimportant to the users, who, it is true, very definitely want edition information; but they see all kinds of forms an edition statement can take in the real world. They can figure this out.
The real difficulty is for machines: a human can easily see that these variant “textual strings” all represent the same concept (3rd edition), but these are the sorts of practices that drive computers *crazy*! If one of the purposes is to get computers to merge/sort records in various ways, and one way will be by using edition information (I, for one, hope they will use edition information!), then what is most important is to have consistent data. For an extreme example, programmers would love an “edition” field where catalogers would just add, e.g.: 3
or whatever the number happens to be. In fact, this seems to be how Amazon does it. Take a look at http://www.amazon.com/Foundations-Mathematics-Ian-Stewart/dp/019870643X/, and we see: “Publisher: Oxford University Press; 2 edition (May 1, 2015)” but if you “Look inside” the book itself it is all written out “Second edition”.
Doing it the Amazon way would make it childishly simple for the programmers to work with, but of course, catalogers know that edition statements can be a *lot* more complicated than that. Catalogers need at least some kind of freedom to input that complexity.
But still, consistency is absolutely vital if computers are going to work their magic by merging records and so on. This is one of the basic complaints I have had with RDA: for a long time now, the catalog record has provided certain areas of consistency, and in the case of editions, this type of consistency goes back much longer than most other parts of the record. (As an example, here is the “Catalogue of Books in the Mercantile Library, of the City of New York” (1866) found in Google Books, with a search for “ed.”: http://bit.ly/1HPV9YR. Scroll through the results, and mixed in among the entries where “ed.” means editor, you’ll see lots of edition statements that use the abbreviation.)
RDA breaks that consistency with edition statements and, as a result, actually *creates* problems where they did not exist before. Merging on edition statements, which can vary *far more widely* under RDA, is only one example of what happens when you break that consistency. In fact, I don’t know how merging could be done now. Today, the computer will have to be programmed to “know” that “3rd edition”, “3rd ed.”, “3d ed.”, “third edition”, and probably several more text strings are actually the same. And catalogers know there are lots more variations than that. Earlier, there were at least numbers to merge on, and consistently input abbreviations. Now that’s gone, and the complexity goes way up.
This goes for most other parts of the bibliographic record too.
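The matching problem described above can be sketched in a few lines of code. This is a minimal, hypothetical normalizer (not any actual system’s logic) that collapses the variant strings into a single number a machine could merge on:

```python
# Hypothetical sketch: collapsing variant edition statements to one number.
import re

# Spelled-out ordinals a cataloger might encounter; extend as needed.
ORDINAL_WORDS = {
    "first": 1, "second": 2, "third": 3, "fourth": 4, "fifth": 5,
    "sixth": 6, "seventh": 7, "eighth": 8, "ninth": 9, "tenth": 10,
}

def normalize_edition(statement):
    """Return the edition number from a free-text edition statement,
    or None if no number can be recognized."""
    text = statement.lower().strip()
    # "3rd edition", "3rd ed.", "3d ed." -> digits plus an optional suffix
    m = re.search(r"\b(\d+)(?:st|nd|rd|d|th)?\b", text)
    if m:
        return int(m.group(1))
    # "third edition" -> spelled-out ordinal word
    for word, number in ORDINAL_WORDS.items():
        if re.search(rf"\b{word}\b", text):
            return number
    return None

# All of these variant strings collapse to the same value:
for s in ["3rd edition", "3rd ed.", "3d ed.", "Third edition"]:
    print(s, "->", normalize_edition(s))
```

Even this toy version shows the trade-off: the old consistent abbreviations needed no such code, while free-text statements force every consumer of the data to maintain a list like this.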
James Weinheimer email@example.com
First Thus http://blog.jweinheimer.net
First Thus Facebook Page https://www.facebook.com/FirstThus
Cooperative Cataloging Rules http://sites.google.com/site/opencatalogingrules/
Cataloging Matters Podcasts http://blog.jweinheimer.net/cataloging-matters-podcasts
The RDA Development Team started talking about developing training for the ‘new’ RDA, with a focus on the vocabularies, in the fall of 2014. We had some notion of what we didn’t want to do: we didn’t want yet another ‘sage on the stage’ event, we wanted to re-purpose the ‘hackathon’ model from a software focus to data creation (including a major hands-on aspect), and we wanted to demonstrate what RDA looked like (and could do) in a native RDA environment, without reference to MARC.
This was a tall order. Using RIMMF for the data creation was a no-brainer: the developers had been using the RDA Registry to feed new vocabulary elements into their software (effectively becoming the RDA Registry’s first client), and were fully committed to FRBR. Deborah Fritz had been training librarians and others on RIMMF for years, gathering feedback and building enthusiasm. It was Deborah who came up with the Jane-athon idea, and the RDA Development group took it and ran with it. Using the Jane Austen theme was a brilliant part of Deborah’s idea. Everybody knows about JA, and the number of spin-offs, rip-offs and re-tellings of the novels (in many media formats) made her work a natural for examining why RDA and FRBR make sense.
One goal stated everywhere in the marketing materials for our first Jane outing was that we wanted people to have fun. All of us have been part of the audience and on the dais for many information sessions, for RDA and other issues, and neither position has ever been much fun, useful as the sessions might have been. The same goes for webinars, which, as they’ve developed in library-land, tend to be dry, boring, and completely bereft of human interaction. And there was a lot of fun at that first Jane-athon; I venture to say that 90% of the folks in the room left with smiles and thanks. We got an amazing response to our evaluation survey, and the preponderance of responses were expansive, positive, and clearly designed to help the organizers do better the next time. The various folks from ALA Publishing who stood at the back and watched the fun were absolutely amazed at the noise, the laughter, and the collaboration in evidence.
No small part of the success of Jane-athon 1 rested with the team leaders at each table, and the coaches going from table to table helping out with puzzling issues, ensuring that participants were able to create data using RIMMF that could be aggregated for examination later in the day.
From the beginning we thought of Jane 1 as the first of many. In the first flush of success as participants signed up and enthusiasm built, we talked publicly about making it possible to do local Jane-athons, but we realized that our small group would have difficulty doing smaller events with less expertise on site to the same standard we set at Jane-athon 1. We had to do a better job in thinking through the local expansion and how to ensure that local participants get the same (or similar) value from the experience before responding to requests.
As a step in that direction, CILIP in the UK is planning an Ag-athon on May 22, 2015, which will add much to the collective experience as well as to the data store that began with the first Jane-athon, and which will be an increasingly important factor as we work through the issues of sharing data.
The collection and storage of the Jane-athon data was envisioned prior to the first event, and the R-Balls site was designed as a place to store and share RIMMF-based information. Though a valuable step towards shareable RDA data, rballs have their limits. The data itself can be curated by human experts or available with warts, depending on the needs of the user of the data. For the longer term, RIMMF can output RDF statements based on the rball info, and a triple store is in development for experimentation and exploration. There are plans to improve the visualization of this data and demonstrate its use at Jane-athon 2 in San Francisco, which will include more about RDA and linked data, as well as what the created data can be used for, in particular, for new and improved services.
So, what are the implications of the first Jane-athon’s success for libraries interested in linked data? One of the biggest misunderstandings floating around libraryland in linked data conversations is that it’s necessary to make one and only one choice of format, and eschew all others (kind of like saying that everyone has to speak English to participate in LOD). This is not just incorrect, it’s also dangerous. In the MARC era, there was truly no choice for libraries–to participate in record sharing they had to use MARC. But the technology has changed, and rapidly evolving semantic mapping strategies [see: dcpapers.dublincore.org/pubs/article/view/3622] will enable libraries to use the most appropriate schemas and tools for creating data to be used in their local context, and others for distributing that data to partners, collaborators, or the larger world.
Another widely circulated meme is that RDA/FRBR is ‘too complicated’ for what libraries need; we’re encouraged to ‘simplify, simplify’ and assured that we’ll still be able to do what we need. Hmm, well, simplification is an attractive idea, until one remembers that the environment we work in, with evolving carriers, versions, and creative ideas for marketing materials to libraries, is getting more complex than ever. Without the specificity to describe what we have (or have access to), we push the problem out to our users to figure out on their own. Libraries have always tried to be smarter than that, and that requires “smart”, not “dumb”, metadata.
Of course, the corollary to the ‘too complicated’ argument is the notion that a) we’re not smart enough to figure out how to do RDA and FRBR right, and b) complex means more expensive. I refuse to give space to a), but b) is an important consideration. I urge you to take a look at the Jane-athon data and consider the fact that Jane Austen wrote very few novels, but they’ve been re-published in various editions, versions and commentaries for almost two centuries. Once you add the ‘based on’, ‘inspired by’ and the enormous trail created by those trying to use Jane’s popularity to sell stuff (“Sense and Sensibility and Sea Monsters” is a favorite of mine), you can see the problem. Think of a pyramid with a very expansive base and a very sharp point, and consider that the works that everything at the bottom wants to link to don’t require repeating the description of each novel every time in RDA. And we’re no longer adding notes to descriptions based on the outdated notion that the only use for information about the relationship between “Sense and Sensibility and Sea Monsters” and Jane’s “Sense and Sensibility” is a human being who looks far enough into the description to read the note.
One of the big revelations for most Jane-athon participants was to see how well RIMMF translated legacy MARC records into RDA, with links between the WEM levels and others to the named agents in the record. It’s very slick and, most importantly, not lossy. Consider that RIMMF also outputs in both MARC and RDF, and you see something of a missing link (if not the Golden Gate Bridge).
Not to say there aren’t issues to be considered with RDA as with other options. There are certainly those, and they’ll be discussed at the Jane-In in San Francisco as well as at the RDA Forum on the following day, which will focus on current RDA upgrades and the future of RDA and cataloging. (More detailed information on the Forum will be available shortly).
Don’t miss the fun, take a look at the details and then go ahead and register. And catalogers, try your best to entice your developers to come too. We’ll set up a table for them, and you’ll improve the conversation level at home considerably!
In honor of last week’s DAMNY conference, here are five things:
- How the AP uses rights metadata by Stuart Myles.
- Standards and metadata for DAM by Lisa Grimm.
- An interview with Heather Goodnow from Gaylord Archival on becoming a DAMster.
- More DAM knowledge from Travis McElroy from Ivie and Associates, a DAM Guru.
- Get a Certificate of DAM from the DAM Foundation.
On 4/13/2015 3:19 PM, Kathleen Lamantia wrote:
> BIBFRAME needs to be rigorously tested and studied to see if it delivers on its promise.
I agree that testing needs to be done, but still, I think that creating a SPARQL endpoint (or several) to provide library catalog data in BIBFRAME to developers who want it can have only positive benefits.
Of course, there are pitfalls. There are no guarantees that developers will want to use our data; they have gotten along without our data for a long time and may see no need for it. Or it may turn out that the developers will not be able to create anything that their own public(s) will find useful. I’ve mentioned some others in earlier posts.
The answers to such practical questions are completely unknown and no research has been done to find out, at least to my knowledge. Still, I will grant that the only way to know if developers will want to use our information for their own purposes is to provide it to them in a format they can use. Setting up a SPARQL endpoint is not all that expensive–much less expensive than switching to RDA for instance or retooling our catalogs for FRBR-type structures. So I am all for BIBFRAME. I just think it should have been step 1 or 2 of the changes in cataloging and catalogs instead of one of the last, but that is “water under the bridge”.
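What “providing it in a format they can use” might look like can be sketched briefly. This is a hypothetical example only: the endpoint URL (catalog.example.org) is invented, though the bf: namespace is the BIBFRAME one published at id.loc.gov. A developer consuming a library’s SPARQL endpoint might build a query like this:

```python
# Sketch: how a developer might query a (hypothetical) library SPARQL
# endpoint exposing BIBFRAME data. No actual endpoint is assumed to exist.
from urllib.parse import urlencode

BF = "http://id.loc.gov/ontologies/bibframe/"

def works_by_title_query(keyword, limit=10):
    """Build a SPARQL query for BIBFRAME Works whose title contains a keyword."""
    return f"""
PREFIX bf: <{BF}>
SELECT ?work ?title WHERE {{
  ?work a bf:Work ;
        bf:title/bf:mainTitle ?title .
  FILTER(CONTAINS(LCASE(?title), LCASE("{keyword}")))
}}
LIMIT {limit}
""".strip()

def endpoint_url(query):
    """Encode the query for the hypothetical endpoint, asking for JSON results."""
    return "https://catalog.example.org/sparql?" + urlencode(
        {"query": query, "format": "application/sparql-results+json"}
    )

print(endpoint_url(works_by_title_query("sense and sensibility")))
```

The point is how little developer-side machinery is needed: a URL, a query string, and a JSON parser, with no knowledge of MARC required.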
As far as the “innards” of library catalogs changing substantially, and changing into directions of BIBFRAME, I see no reason why that has to happen–at least, not for a long time. Records can be created and maintained in our current catalogs just as they are today–maybe some minor changes will be needed occasionally–and then exported to whatever SPARQL endpoints the library chooses.
If the idea is to create more FRBR-like structures so that the author entity will be linked to the work entity, the work entity to the expression entity … (it all reminds me of that old song “Dry Bones” https://www.youtube.com/watch?v=hYeQUXXYvK0), then in that case, there probably would be major expenses.
James Weinheimer firstname.lastname@example.org
First Thus http://blog.jweinheimer.net
First Thus Facebook Page https://www.facebook.com/FirstThus
Personal Facebook Page https://www.facebook.com/james.weinheimer.35
Google+ https://plus.google.com/u/0/+JamesWeinheimer
Cooperative Cataloging Rules http://sites.google.com/site/opencatalogingrules/
Cataloging Matters Podcasts http://blog.jweinheimer.net/cataloging-matters-podcasts
The Library Herald http://libnews.jweinheimer.net/
holistic technologies and prescriptive technologies. In the former, a practitioner has control over an entire process, and frequently employs several skills along the way... By contrast, a prescriptive technology breaks a process down into steps, each of which can be undertaken by a different person, often with different expertise. It's the artisan vs. Henry Ford's dis-empowered worker. As we know, there has been some recognition, especially in the Japanese factory models, that dis-empowered workers produce poorer quality goods with less efficiency. Brown has a certain riff on this, but what came immediately to my mind was the library catalog.
My colleague Roy Tennant has eloquently argued that "MARC Must Die". Certainly much of what he says is true, but it's been almost 13 years since that column and MARC, while weakening, still seems to be hanging on. I think there is actually more going on here than simply legacy software that won't go away, so I'd like to offer my less eloquent defense of MARC.
First, what I'm defending here is the underlying MARC communications format (ANSI Z39.2/ISO 2709), or at least its XML/JSON equivalents, not full MARC 21 or other variants with their baggage of rules. Those rules and their encoding could also use some defense (after all, they were all established after extensive debate), but that is a different essay.
What I like about MARC is its relative lack of predetermined structure beyond indicators, fields and subfields, and conventions about using 3-digit numerics for tags and a-z, 0-9 for subfield codes. Admittedly, you are restricted to the two-level field/subfield structure, but an amazing amount of bibliographic stuff fits (and anything can be shoved) into it. The 5-digit length field is a serious limitation, but one overcome by the translation to XML/JSON. I watched a video by Donald Knuth recently about "Constraint Based Composition", where he talks about constraints on form fostering creativity. Maybe the MARC format's constraints are just the sort of structure that makes metadata somewhat manageable, while still retaining some flexibility.
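The two-level structure just described can be sketched in a few lines. This assumes the common display convention of "$" plus a letter for subfield delimiters, as used by many MARC editing tools; the actual communications format uses a binary delimiter character instead:

```python
# Sketch of MARC's two-level structure: a 3-digit tag, two one-character
# indicators, and lettered subfields. The "$x" delimiter convention here
# follows common MARC display/editing tools, not the binary wire format.
import re

def parse_marc_line(line):
    """Parse one display-format MARC field into (tag, indicators, subfields)."""
    tag, indicators, rest = line[:3], line[4:6], line[7:]
    # Split at each "$"; the first character of each piece is the subfield code
    subfields = [(part[0], part[1:]) for part in re.split(r"\$", rest) if part]
    return tag, indicators, subfields

field = parse_marc_line(
    "245 10 $aRusslan und Ludmila :$bOper in 5 Aufzügen /$cMikhail I. Glinka"
)
print(field)
```

That a usable parser fits in a few lines is exactly the "day or two's work" point: the format imposes almost nothing beyond tag, indicators, and repeatable coded subfields, so adding a new field or subfield costs the software nothing.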
Contrast this to a MODS/MADS record. Here everything is laid out, with rules to make sure you don't modify the structure. When we started xA, a small authority file to feed into and control VIAF, MADS seemed a better fit for the rather simple authority records we expected to create than a full-blown MARC 21 authority record. What I didn't realize was that bringing up a simple editor for MARC is a day or two's work (see mS, also known as the MARC Sandbox). A full MADS editor would be substantially more work. In fact, xA's simple take on MADS was substantially more work. Which was OK, until we decided to change something. No longer was it just a matter of sticking in a new text field (and possibly modifying a table to allow it); now interface issues needed to be faced, and changes made in the record display and editing forms.
All of a sudden, the slightly greater effort for a possibly friendlier form turned very unfriendly for someone (me) trying not to spend any more time on the infrastructure than absolutely necessary. In mS you can actually cut and paste from a MARC display in other MARC tools and things 'just work'. Plus there is the familiarity with MARC, which makes many people (at least where I work!) very comfortable. The ability to easily add fields and subfields to accommodate new needs appears to be much more important than a pretty interface. In fact, we attempted to extend xA to handle the needs of some Syriac scholars, but failed because of the amount of effort involved. What did they use instead? A very complex Excel spreadsheet! I've seen any number of attempts to fit bibliographic information into spreadsheets, and they do not work nearly as well as MARC.
The first version of the mS form was actually MARC, but with tags, indicators, subfields neatly separated into their own boxes. It turned out that the simple text field we are now using was both easier to work with (e.g. cut/paste) and substantially easier to get working smoothly.
I admit that using MARC seems quite retro. MARC is an old format that has held up surprisingly well, but it is showing its age. Using it is a bit like using Emacs or Vi to create software: they are very 'close to the metal', and that's just what lots of people want.
This weekend, I got to partake in one of my favorite parts of being a faculty member: walking in the faculty processional for the 2015 Spring Commencement. For the past two years, I’ve been lucky enough to participate in the Spring Commencement at The Ohio State University, and while the weather can be a bit on the warm side, the event always leaves me refreshed and excited to see how the Libraries can find new ways to partner with our faculty, students, and new alumni; and this year was no different (plus, any event where you get to wear robes and funny hats is one I want to be a part of). Under the beating sun and unseasonable humidity, President Drake reminded all in attendance that while OSU is a world-class research and educational institution, our roots are and continue to be strengthened by our commitment as a land grant institution: to be an institution whose core mission is to educate the sons and daughters of our great state, and beyond. And I believe that. I believe in the land grant mission, and in the special role and bond that land grant institutions have with their states and their citizens. So it was with great joy that I found myself in Ohio Stadium to celebrate the end of one journey for the ~11,000 OSU graduates, and the beginning of another as these graduates look to make their own way into the future.
In my two years at OSU, one of the things you hear a lot at this institution is a commitment to “Pay it Forward”. I’ve found, among the faculty, the staff, and the alumni that number close to half a million, that these aren’t just words but a way of life for so many who are a part of Buckeye Nation. Is this unique to Ohio State? No, but with so many alumni, the influence is much easier to see. You see it in the generosity of time, the long-term mentorship, the continued engagement with this institution: when you join Buckeye Nation, you are joining one big extended family.
I find that it is sometimes easy to forget the role that you get to play as part of the faculty in helping our students be successful. It’s easy to get bogged down in the committee work, the tenure requirements, your own research, or the job of being a faculty member at a research institution. It’s easy to take for granted the unique privilege that we have as faculty to serve and pay it forward to the next generation. Sitting at Ohio Stadium, with so many graduates and their parents and friends... yes, it’s a privilege. Congratulations, class of 2015.