Planet Cataloging

September 21, 2014

Celeripedean

Jen

In several of my past posts, I’ve talked about change. In relation to metadata services, I put out the survey because I feel that a change is needed where I work. More specifically, as users’ needs change, the way we help them, aka do business, also has to change. I’m not advocating for giving up cataloging or saying goodbye to MARC. Our users also need our expertise in describing materials that the library owns, borrows, leases and might potentially lease, own or rent. This is one aspect of metadata services that is complex. It means advocating for change while also keeping the same service levels that we’ve maintained with other workflows. This is not always easy or welcome, because it inevitably means extra work, training and skill building.

This week, I read a fantastic post by Sally Gore at “A Librarian by Any Other Name” called “Follow the leader”. She refers to an article, “Convincing Employees to Use New Technology” by Didier Bonnet at the Harvard Business Review. Briefly, the article provides tips on how to go beyond technology implementation to successful technology adoption. According to the article, one of the primary reasons a technology is not adopted is that the effort went into the technology’s implementation and not its adoption. Further, new technologies tend not to effect a change in how business is done, which creates a conflict from the start. To help adoption and change business practices in our digital working environment, Bonnet provides the following tips: do fewer things better, plan and budget for adoption, lead by example, engage true believers, engage HR and organizational people sooner and better, align rewards and recognition.

Sally Gore focuses her post on leading by example. She explains that many still say no to new technology. I would go further and say that many still say no to change that makes one feel vulnerable in the workplace. Her post has some great examples but finishes with the idea that in library land, our leaders need to be examples.

We need this same kind of leadership in libraries, in the Academy, and in other areas of science. Those of us who see and/or have experienced the value of implementing new technologies into our work need to be fairly tireless in banging the can for them. We need to continue to lead by example and hopefully, in time, we will all reap the rewards.

This is a great post, and the original article is worth reading. I agree with Sally that our leaders need to set an example, not just for new technologies but for changes that help us meet the changing needs of users. However, our leaders are not always ready to lead by example. When they are not, it is important to incorporate change and new technologies into your own workflow. In other words, everyone needs to lead by example.


Filed under: cataloging Tagged: change, Harvard Business Review, new technologies

by Jen at September 21, 2014 08:10 PM

September 20, 2014

Resource Description & Access (RDA)

Articles, Books, Presentations on Resource Description and Access (RDA)




Resource Description and Access (RDA) ➨ Articles, Books, Presentations, Thesis, Videos

Awareness, Perceptions, and Expectations of Academic Librarians in Turkey about Resource Description and Access (RDA)

D Atılgan, N Özel, T Çakmak - Cataloging & Classification Quarterly, 2014
Resource Description and Access (RDA), as a new cataloging standard, supports libraries in
their bibliographic description processes by increasing access points. The increasing
importance of RDA implementation requires adaptation to a new bibliographic universe. ...

China's Road to RDA

C Luo, D Zhao, D Qi - Cataloging & Classification Quarterly, 2014
... 9 (2012): 3–5. 8 Jiang Hualin, “Analysis on the Use Method and the Function of Resource
Description and Access Toolkit,” Library Development (Harbin) no. 9 (2012): 24–26. ... 10 Tang
Caixia, “Resources Analysis of RDA Toolkit,” Library Journal (Shanghai) 32, no. ...

RDA in Spanish: Translation Issues and Training Implications

A Garcia - Cataloging & Classification Quarterly, 2014
... Quarterly 49, nos. 7/8 (2011): 572. 17 Library of Congress, “Resource Description
and Access (RDA): Information and Resources in Preparation for RDA,”
http://www.loc.gov/aba/rda/ (accessed January 7, 2014). 18 Library of ...

Are Philippine Librarians Ready for Resource Description and Access (RDA)? The Mindanao Experience

AP Acedera - Cataloging & Classification Quarterly, 2014
This study aimed to find out the level of readiness of Mindanao librarians to use Resource
Description and Access (RDA), which has been prescribed and adopted by the Philippine
Professional Regulatory Board for Librarians (PRBFL). The majority of librarians are ...

The Adoption of RDA in the German-Speaking Countries

R Behrens, C Frodl, R Polak-Bennemann - Cataloging & Classification Quarterly, 2014
... A decision with such far-reaching implications as the adoption of a new code for the cataloging
of resources represents a great challenge for all concerned. ... In the changeover to the international
Resource Description and Access (RDA) standard described here, the ...

RDA: National Library Board Singapore's Learning Journey

K Choi, HM Yusof, F Ibrahim - Cataloging & Classification Quarterly, 2014
... took on the national effort to execute the new principles for description and access. ... 2) NLB's RDA
policies, standards, and decisions; (3) RDA documentation and resources; (4) communication ...
ranged from the availability and access to RDA documents and resource materials to ...

RDA in Israel

M Goldsmith, E Adler - Cataloging & Classification Quarterly, 2014
... As mentioned above, the Israeli practice of cataloging non-Roman script resources in the
vernacular necessitates translating various terms into Hebrew, Arabic, and Russian. ... 10 Library
of Congress, “Testing Resource Description and Access (RDA).” http://www.loc.gov ...

In the Company of My Peers: Implementation of RDA in Canada

E Cross, S Andrews, T Grover, C Oliver, P Riva - Cataloging & Classification Quarterly, 2014
... countries of RDA: Resource Description and Access, the new standard that supersedes the
Anglo-American Cataloguing Rules, 2nd Edition, Revised (AACR2). ... Canadian libraries took
advantage both of Canadian resources and those made available by other countries. ...

Implementing RDA in a Time of Change: RDA and System Migration at RMIT University

M Parent - Cataloging & Classification Quarterly, 2014
... in greater detail, and to provide training in cataloging translations and resources in multiple ... of
description, how to handle transcription elements (transcribe as found on resource OR according
to ... policy at the outset of training considered type of RDA description (descriptive) and ...

Catalogue 2.0: The future of the library catalogue

D Sullivan - The Australian Library Journal, 2014
... RDA and serials cataloguing is a technical manual for those who need to catalogue serials using
RDA (Resource Description and Access) and as ... of the MARC 21 formats, but not necessarily
as they apply to the cataloguing of serials and on-going integrating resources'. …

Acceptance and Viewpoint of Iranian Catalogers Regarding RDA: The Case of the National Library and Archive of Iran

F Pazooki, MH Zeinolabedini, S Arastoopoor - Cataloging & Classification Quarterly, 2014
... 12 Chris Stanton, “Resource Description & Access (RDA) Update from the National Library”
(2012), http://nznuccataloguing.pbworks.com ... 13 Farshid Danesh, Mina Afshar, “RDA: A New
Standard for Digital Resources Access,” 2008, http://de.scientificcommons.org/23204029. ...

RDA Implementation Issues in the Iranian National Bibliography: An Analysis of Bibliographic Records

F Pazooki, MH Zeinolabedini, S Arastoopoor - Cataloging & Classification Quarterly, 2014
... Therefore, Resource Description and Access (RDA) was formed. RDA makes description of, and
access to, print and electronic resources possible based on the Functional Requirements of
Bibliographic Records (FRBR) conceptual model and accordingly, not only will it affect ...

[PDF] Using RDA to Catalog ETDs

J Milligan - 2014
... Page 14. PHYSICAL DESCRIPTION FIXED FIELD, MARC 007 ... publication, release, or issuing
of a resource. Consider all online resources to be published. ... “For all other records make a Mode
of Access note only if the resource is accessed by means other than the World Wide ...

The RDA Workbook: Learning the Basics of Resource Description and Access

KE McCormick - Technical Services Quarterly, 2014
Chapter 2 covers preparing bibliographic records using RDA. This core portion of the book
covers the major RDA changes seen in MARC records. It covers the new publication field,
the 33x fields, relationship designators, etcetera. The exercises in this chapter provide the ...

Maxwell's Handbook for RDA: Examining and Illustrating RDA: Resource Description and Access Using MARC 21

PH Lisius - Technical Services Quarterly, 2014
Maxwell's book jumps around the larger sections of RDA. For example, "Describing
Manifestations and Items" (Chapter 2) follows the analogous Section 1 of RDA, "Recording
Attributes of Manifestation & Item." Maxwell's Chapters 3 through 5 respectively cover the ...

Making the Move to RDA: A Self-Study Primer for Catalogers

LM McFall - Technical Services Quarterly, 2014
... Resource Description and Access, or RDA, has been a hot topic in the world of
cataloging for nearly 10 years. Since its official adoption in 2013, several books have
been written about how to implement and use RDA. The latest ...

See also: RDA Bibliography : a sub-blog of RDA Blog

by Salman Haider (noreply@blogger.com) at September 20, 2014 06:12 PM

September 19, 2014

Open Metadata Registry Blog

Activity feeds fixed

Part of our server move was prompted by the ever-increasing indexing of our many, many activity feeds.

In addition to the overall history feed on the front page, each vocabulary has a history feed:
Metadata Registry Change History for Element Set: ‘MARC 21 Elements 00X’

Each element or concept also has its own history feed:
Metadata Registry Change History for Element: ‘Biography of Books’ in Element Set: ‘MARC 21 Elements 00X’

Each individual statement has a history feed (yes we are indeed as crazy as you may suspect we are at this point):
Metadata Registry Change History for Property: ‘label’ of Element: ‘Biography of Books’ in Element Set: ‘MARC 21 Elements 00X’

We believe that it’s important for vocabulary management systems to track changes (what, who, when) right down to the statement level. While we have no idea whether there’s any utility to a history feed that fine-grained, we do track it, so it seemed reasonable to offer a feed even at that level, in RSS1 (RDF), RSS2, and Atom, no less.

This means we have thousands of feeds, and they are being crawled heavily by various search engines, much more heavily than the resource changes they represent warrant, which generates a lot of bandwidth and processing overhead. We’re working on a number of different solutions, but the first one we tried — implementing app-level managed caching — didn’t do the trick, and we spent quite a bit of time fiddling with different configuration options. In tinkering with that we broke the feeds in a way that was very hard to track down, since they passed our integration tests, worked on our staging server, and then broke on the production server… sometimes.

So they’re fixed now, and we’re exploring other caching options.
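One option in that space is plain HTTP conditional GET (ETag/Last-Modified), which lets well-behaved crawlers receive a cheap 304 Not Modified instead of a freshly rendered feed. A minimal sketch, assuming a standard Rails controller serves these feeds (the controller and model names here are hypothetical):

class HistoryFeedsController < ApplicationController
  def show
    @vocabulary = Vocabulary.find(params[:id])     # hypothetical model
    last_change = @vocabulary.updated_at

    # Sets ETag/Last-Modified and sends a 304 with no body if the client's copy is current
    if stale?(etag: @vocabulary, last_modified: last_change, public: true)
      expires_in 10.minutes, public: true          # let intermediaries cache briefly too
      respond_to do |format|
        format.rss
        format.atom
      end
    end
  end
end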

by Jon Phipps at September 19, 2014 08:54 PM

TSLL TechScans

Draft LC-PCC Policy Statement on facsimiles and reproductions available for comment

The Program for Cooperative Cataloging (PCC) has issued a draft LC-PCC Policy Statement which outlines proposed exceptions to the RDA instructions regarding facsimiles and reproductions. RDA currently instructs catalogers to describe a facsimile or reproduction by "record[ing] the data relating to the facsimile or reproduction in the appropriate element. Record any data relating to the original manifestation as an element of a related work or related manifestation, as applicable." The draft Policy Statement proposes deviating from this instruction by recording certain elements as they apply to the original resource, and using the MARC 533 field to record certain other elements as they pertain to the reproduction, mirroring LC's practice under AACR2 chapter 11. In addition, the draft Policy Statement sets forth guidelines on a provider-neutral approach to cataloging print-on-demand materials and photocopies.
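For context, the MARC 533 field mentioned above is a structured reproduction note. A minimal sketch using the ruby-marc gem, with entirely made-up record data (the subfield layout is standard MARC 21; nothing here comes from the draft Policy Statement itself):

require 'marc'

# Reproduction note: type, place, agency, date, physical description
field = MARC::DataField.new('533', ' ', ' ',
  MARC::Subfield.new('a', 'Photocopy.'),
  MARC::Subfield.new('b', 'Ann Arbor, Mich. :'),
  MARC::Subfield.new('c', 'University Microfilms,'),
  MARC::Subfield.new('d', '1979.'),
  MARC::Subfield.new('e', '1 v. ; 28 cm.'))

record = MARC::Record.new
record.append(field)
puts record['533']   # prints the field in ruby-marc's text form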

The PCC is soliciting feedback on the proposed Policy Statement until September 26. To read the draft, go to the PCC's homepage (http://www.loc.gov/aba/pcc/) and look under "What's New."

by noreply@blogger.com (Jean Pajerek) at September 19, 2014 06:24 PM

September 18, 2014

Open Metadata Registry Blog

Password reset fixed

One of the side effects of our server move was the need to reconfigure our locally-hosted transactional email services, which were always a little flaky — if you have tried to reset your password lately you will no doubt have noticed that the promised email never arrived.

That’s fixed now. We switched from self-hosted to using Postmark, which works very well and should be far more stable.

by Jon Phipps at September 18, 2014 09:50 PM

Bibliographic Wilderness

Umlaut 4.0 beta

Umlaut is an open source specific-item discovery layer, often used on top of SFX, and based on Rails.

Umlaut 4.0.0.beta2 is out! (Yeah, don’t ask about beta1 :) ).

This release is mostly back-end upgrades, including:

  • Support for Rails 4.x (Rails 3.2 support is retained to make migration easier for existing installations, but we recommend starting with Rails 4.1 in new apps)
  • Based on Bootstrap 3 (the previous, Rails 3-based version used Bootstrap 2)
  • internationalization/localization support
  • A more streamlined installation process with a custom installer

Anyone interested in beta testing? Probably most interesting if you have an SFX to point it at, but you can take it for a spin either way.

To install a new Umlaut app, see: https://github.com/team-umlaut/umlaut/wiki/Installation
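The wiki has the authoritative steps; roughly, a new Rails 4.1 app’s Gemfile would include something like the following (the version constraints are assumptions), after which you follow the installer steps described on the wiki:

# Gemfile (excerpt)
source 'https://rubygems.org'

gem 'rails', '~> 4.1'
# Pre-release gems must be requested with an explicit version
gem 'umlaut', '4.0.0.beta2'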


Filed under: General

by jrochkind at September 18, 2014 06:39 PM

Catalogue & Index Blog

cilipcig

Members’ attention is drawn to the consultation into the future governance of RDA recently announced by the Committee of Principals.

CoP is soliciting responses from all stakeholders

Read the discussion document or see the presentation given by Simon Edwards, Chair of the Committee of Principals, to the IFLA Satellite Meeting RDA: Resource Description and Access – Status and perspectives 2014, Frankfurt, 13th August.

Submitting a response to the consultation document

Stakeholders are asked to submit a response to this paper via email direct to the Chair of CoP. Send responses to:  simon.edwards@cilip.org.uk  with the email subject field “RDA Governance Review Consultation.”  Submissions should be limited to 1000 words.

Respondents are asked to respond to the following questions and should bear in mind the principles already agreed by CoP (see section 3 of the background paper).

1. How do you think the Governance Structure could be improved?
2. How could structures be developed to facilitate requests for changes to the standard to be submitted and the views of stakeholders to be represented?
3. Are there any existing structures that could be built upon?
4. Are you aware of any other governance models for this kind of activity which you think we should be aware of/investigate?

All responses must be received by 31st December 2014.


by cilipcig at September 18, 2014 07:13 AM

Free Moth :: Flutterings

A Silly Parade with Circus Clowns

When I wrote the tune A Silly Parade in May 2013 I had an internal image of clowns and people walking along in some sort of parade. The piece was probably the first piece I created in Pro Tools using manual MIDI note input linked to instrument plugins.

I thought I’d try my hand at creating another music video using Live Movie Maker, this time using some stock footage rather than stuff I’d shot myself. I found a few likely candidates in the Prelinger Archive. This archive was founded in 1983 by Rick Prelinger in New York City and contains thousands of historical films.

I decided to use three circus-related films from the 1940s:

The result is called, not surprisingly, A Silly Parade.

A Silly Parade

It’s about three and a half minutes long. Hope you find it interesting …


by freemoth at September 18, 2014 01:28 AM

September 17, 2014

Celeripedean

Jen

In my last post, I ended with the question of how to get the resources you need to create and deliver metadata services. I should say right now that I’m not really sure how to do this effectively. One issue is that metadata services is a very vague subject. What are metadata services? And then who delivers those services? There’s also the question of change and how change is accepted in your institution.

One of the reasons I put out a survey on metadata practices in digital scholarship was to find out what these practices were. Further, I wanted to see if people had any suggestions or things that they tried at their institution that helped create and deliver such services. The second reason was that I was interested in seeing what is meant by a “public” metadata service. I wrote the survey around the time of the ALCTS e-forum on public service and cataloging. The majority of participants told stories of how they work at a public “desk” in the library and how their cataloging skills helped them find information faster and more efficiently for the patron. With metadata services, I believe there is a public component. You might not sit at a desk for a couple of hours per day, but you are on call to answer questions from staff or patrons, and you also set up appointments for longer consultations. For myself, these public reference and consultation services concern almost exclusively digital initiatives (research data, digital humanities, and our digital repository).

Let me return to resources, because I wanted to know how people responded to metadata needs (creating, enhancing, transforming between various standards, best practices, etc.) in digital scholarship with a small staff and few resources for training. There was one respondent who described a pilot project at their institution. The respondent explained that the pilot project was a new attempt to provide services to those who work with research data. Essentially this pilot brought together several staff across units: a metadata librarian, subject specialists, programmers, and digital librarians. Their goal was to draw on each other’s strengths to create a toolkit for faculty who produce research data (most likely in the sciences). This toolkit included tips and ways to get help from this team in the library. What I found interesting was that to have the staff resources necessary, this institution brought together a team of people who work in different areas. To help with training and time, the team members helped each other learn as they developed this toolkit for faculty.

This is something that my institution has had to do as well. No one person can handle all of these services together. This includes metadata services. With research data, the researcher often has the best handle on their data, or at least you hope so. They might need guidance in making it more accessible to a certain user base or more consistent for re-use, for visualizations for example. However, they are the data owners, not you. This allows the metadata specialist to work with a fellow data expert. When you add a subject specialist, you add another specialist who understands the researcher, their work and their “framework”. In some cases, the subject specialist has done or participates in the researcher’s type of project. What is more, the subject specialist also understands the library’s goal of helping the researcher with whatever they need help with (perhaps a data management plan, a digital humanities project, creating a timeline, submitting research to a repository, …). When you add a digital initiatives librarian or even a programmer, there are more people to share ideas and move forward. Instead of one person or one unit trying to respond to needs, you have a group working across silos.

With the survey, I would say that a majority of respondents worked across silos in their institution. I would venture to say that even the largest and best funded institutions don’t have all the resources they would like. To offer services, it is necessary to have a group effort. That way staff rely on, learn from, teach, and help each other.


Filed under: cataloging Tagged: resources

by Jen at September 17, 2014 07:59 PM

September 16, 2014

Catalogue & Index Blog

cilipcig

We are pleased to announce our recently launched annual bursary. Full details can be found below. Best of luck to all applicants!

Purpose: The CILIP Cataloguing & Indexing Group intends to support research, best practice, and professional development in the field of metadata, cataloguing, classification, indexing, and the technology for these areas with an annual bursary of up to £500. CIG also wants to help disseminate the outcomes of any sponsored projects or activities.

Conditions: This bursary is intended for future or ongoing projects (i.e. not awarding past achievements) where no other funds are available. It is available to CIG members. Candidates are expected to report on their results/findings/output etc. to the CIG committee; these reports are to be published in the group’s journal, members’ newsletter, and/or blog. If the report or results are otherwise published, the support from CIG Annual Bursary should be acknowledged. The bursary is not intended for primary professional training (e.g. library school fees). Depending on the suitability of applications, the bursary may be split or not awarded.

Application: The bursary will be announced at least four weeks in advance of its deadline which shall be 31st October; the candidates will be informed of the outcome within another four weeks. Applications should be submitted to the CIG Chair/Secretary and should include:

  • A covering letter of application
  • Details of how the bursary will be spent i.e.
    • a description of the aims & objectives of the project or activity
    • how it will contribute to the professional development of individuals or generally to CIG’s field of interest (not more than 500 words)

A supporting statement from anyone in the wider library profession is optional.

Decision: The panel of judges for the CIG Annual Bursary will be comprised of three persons:

  • CIG Chair or CIG committee member nominated by the Chair
  • Professional academic, invited by the committee
  • Professional practitioner or other expert in the field, invited by the committee

Payment: Payment will be made to the successful applicant(s) by cheque or electronic bank transfer, at a time determined by the judging panel. The panel may impose conditions, such as proof of expenses, that have to be fulfilled before the full sum is paid out.

 


by cilipcig at September 16, 2014 09:38 AM

September 15, 2014

Metadata Matters (Diane Hillmann)

Who ya gonna call?

Some of you have probably noted that we’ve been somewhat quiet recently, but as usual, it doesn’t mean nothing is going on, more that we’ve been too busy to come up for air to talk about it.

A few of you might have noticed a tweet from the PBCore folks on a conversation we had with them recently. There’s a fuller note on their blog, with links to other posts describing what they’ve been thinking about as they move forward on upgrading the vocabularies they already have in the OMR.

Shortly after that, a post from Bernard Vatant of the Linked Open Vocabularies project (LOV) came over the W3C discussion list for Linked Open Data. Bernard is a hero to those of us toiling in this vineyard, and LOV one of the go-to places for those interested in what’s available in the vocabulary world and the relationships between those vocabularies. Bernard was criticizing the recent release of the DBpedia Ontology, having seen the announcement and, as is his habit, going in to try and add the new ontology to LOV. His gripes fell into a couple of important categories:

* the ontology namespace was dereferenceable, but what he found there was basically useless (his word)
* finding the ontology content itself required making a path via the documentation at another site to get to the goods
* the content was available as an archive that needed to be opened to get to the RDF
* there was no versioning available, thus no way to determine when and where changes were made

I was pretty stunned to see that a big important ontology was released in that way–so, apparently, was Bernard, although since that release there has been a meeting of the minds, and the DBpedia Ontology is now resident in LOV. But as I read the post and its critique my mind harkened back to the conversation with PBCore. The issues Bernard brought up were exactly the ones we were discussing with them–how to manage a vocabulary, what tools were available to distribute the vocabulary to ensure easy re-use and understanding, the importance of versioning, providing documentation, etc.

These were all issues we’d been working hard on for RDA, and are still working on behind the RDA Registry. Clearly, there are a lot of folks out there looking for help figuring out how to provide useful access to their vocabularies and to maintain them properly. We’re exploring how we might do similar work for others (so ask us!).

Oh, and if you’re interested in our take on vocabulary versioning, take a look at our recent paper on the subject, presented at the IFLA satellite meeting on LOD in Paris last month.

I plan on posting more about that paper and its ideas later this week.

by Diane Hillmann at September 15, 2014 07:31 PM

025.431: The Dewey blog

Anthropological Linguistics, Ethnolinguistics, Sociolinguistics of Specific Languages

Sociolinguistics and its closely related neighbors, ethnolinguistics and anthropological linguistics, are often studied in the context of specific languages.  (“Sociolinguistics” will be used throughout to represent any and/or all of sociolinguistics, ethnolinguistics, and anthropological linguistics.)  Until recently, however, no provision was given in the DDC for expressing a specific language in the context of sociolinguistics et al.  An expansion at 306.442 Anthropological linguistics, ethnolinguistics, sociolinguistics of a specific language now provides for notation from Table 6 Languages to be added to 306.442 to collocate language-specific sociolinguistics.  

Consider, for example, Language ideologies and the globalization of standard Spanish (to which the LCSH Sociolinguistics has been assigned).  With the expansion at 306.442, the number for this work becomes 306.44261 Sociolinguistics of Spanish (built with 306.442, plus notation T6—61 Spanish, as instructed at 306.442).  (Because it wasn't possible previously to add Table 6 notation to 306.44 Language ["Class here sociolinguistics"], classifiers sometimes desperately and cleverly tried to fill the need by adding standard subdivision notation T1—09 Geographic treatment, plus notation T2—175 Regions where specific languages predominate, as instructed under T1—091 Areas, regions, places in general, plus the appropriate notation from Table 6, as instructed under T2—175.  Such classification was an end run around a missing add instruction:  geographic treatment in a region where a language predominates is not the same thing as the language itself.)

Sometimes the classification of a sociolinguistics work needs to express both language and place.  Because this happens fairly often, the add instruction under 306.442 provides for the classifier to use 0 as a facet indicator between the notation for language and Table 2 notation instead of using standard subdivision T1—09:  "Add to base number 306.442 notation T6—2-9 from Table 6, e.g., sociolinguistics of French 306.44241; then add 0 and to the result add notation T2—1-9 from Table 2, e.g., sociolinguistics of French in Quebec 306.442410714."   This is the approach that we take with a work like Sociolinguistics and Nigerian English, which should now be classed in 306.442210669 Sociolinguistics of English in Nigeria (built with 306.442, plus notation T6—21 English, plus facet indicator 0, plus notation T2—669 Nigeria, all as instructed under 306.442).

A work on the sociolinguistics of a place, where the work is not language-specific, uses 306.44 Language plus standard subdivision T1—09 Geographic treatment plus Table 2 notation, as in the past.  For example, The languages of urban Africa, to which the LCSH Sociolinguistics—Africa has been assigned, should continue to be classed in 306.44096091732 Sociolinguistics of African urban regions (built with 306.44, plus notation T1—09 Geographic treatment, plus notation T2—6 Africa, as instructed under T1—093-099 Specific continents, countries, localities, plus notation T1—093-099:091 Areas, regions, places in general from the add table under T1—093-099, plus notation T2—1732 Urban regions, as instructed under T1—093-099:091).
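To make the built numbers above concrete, here is a minimal sketch in plain Ruby; the method and the hard-coded notation strings simply restate the examples given in this post and are not taken from the DDC schedules themselves:

def ddc_sociolinguistics(t6_language, t2_place = nil)
  number = "306.442" + t6_language        # base number plus Table 6 notation for the language
  number += "0" + t2_place if t2_place    # 0 as facet indicator, then Table 2 notation for the place
  number
end

ddc_sociolinguistics("61")          # Spanish            => "306.44261"
ddc_sociolinguistics("21", "669")   # English in Nigeria => "306.442210669"
ddc_sociolinguistics("41", "714")   # French in Quebec   => "306.442410714"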

by Rebecca at September 15, 2014 01:44 PM

September 12, 2014

Resource Description & Access (RDA)

DATE OF PUBLICATION NOT IDENTIFIED IN THE RESOURCE - RDA EXAMPLES

DATE OF PUBLICATION NOT IDENTIFIED IN THE RESOURCE

CASE                                    RDA / LC-PCC PS EXAMPLE
approximate date                        [2014?]
supplied date                           [2014]
date of publication not identified      [date of publication not identified]
two years                               [2013 or 2014]
between years                           [between 2005 and 2014?]
not before                              [not before 2000]
not after                               [not after 2000]
between years with date                 [between March 13, 2000 and July 10, 2014]

<<<<<=====>>>>>

Note: Cells are left blank for catalogers to fill with data. Please supply cases and solutions, quoting the proper RDA rules, for situations where the date of publication is not identified. They will be included in this table along with the name of the cataloger. Write your suggestions in the "comments" section of this blog post.

<<<<<=====>>>>>

DATE OF PUBLICATION NOT IDENTIFIED IN THE RESOURCE - LC-PCC PS FOR 2.8.6.6


by Salman Haider (noreply@blogger.com) at September 12, 2014 11:13 PM

Temporary / Permanent Date in an Incomplete Multipart Monograph : Questions and Answers in the Google+ Community "RDA Cataloging"

RDA Cataloging is an online community/group/forum for library and information science students, professionals and cataloging & metadata librarians. It is a place where people can get together to share ideas, trade tips and tricks, share resources, get the latest news, and learn about Resource Description and Access (RDA), a new cataloging standard to replace AACR2, and other issues related to cataloging and metadata.

 Questions and Answers in the Google+ Community "RDA Cataloging"

<<<<====>>>>

Publication etc., dates (MARC21 264). These conventions do not apply to serials or integrating resources (temporary data not recorded in this field).

Temporary date. If a portion of a date is temporary, enclose the portion in angle brackets.

EXAMPLE

, 1980-〈1981〉 
v. 1-2 held; v. 2 published in 1981

, 〈1981-〉 
v. 2 held; v. 1-2 published in 1981

, 〈1979〉-1981. 
v. 2-3 held of a 3-volume set

, 〈1978-1980〉 
v. 2-3 held of a 5-volume set

Permanent date. If an entire date is judged to be permanent, record it without angle brackets.

EXAMPLE

, 1980-
not 
〈1980-〉 or, 1980-〈 〉 
v. 1 held; v. 1 published in 1980

[Source: LC-PCC PS for RDA Rule 1.7.1]
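As an illustration only, here is how the first example above (volumes 1-2 held, volume 2 published in 1981) might be encoded with the ruby-marc gem; the record content is made up, and plain ASCII angle brackets stand in for the angle brackets shown above:

require 'marc'

record = MARC::Record.new
record.append(MARC::DataField.new('264', ' ', '1',
  MARC::Subfield.new('a', '[Place of publication not identified] :'),
  MARC::Subfield.new('b', '[publisher not identified],'),
  MARC::Subfield.new('c', '1980-<1981>')))   # closing date is temporary until the set is complete

puts record['264']['c']   # => 1980-<1981>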

by Salman Haider (noreply@blogger.com) at September 12, 2014 11:13 PM

OCLC Cataloging and Metadata News

Five Canadian research libraries collaborate to share cataloguing of large collections

Joseph Hafner is Associate Dean, Collection Services at McGill University. During the 2014 WorldShare Metadata Users Group Meeting at ALA Annual in Las Vegas, he shared how OCLC’s WorldShare Metadata and the WorldCat knowledge base enabled five Canadian research libraries—University of Alberta, University of British Columbia, McGill University, Université de Montréal and University of Toronto—to catalogue nearly 7,000 records over just a few months. Many of these records are for materials from the National Film Board and Québec government publications.

September 12, 2014 04:00 PM

September 11, 2014

Mod Librarian

5 Things Thursday: DAM, Special Special Collections, RDA

Here are 5 more things of interest:

  1. A lovely British take on why organisations need a digital asset management department. Including that “the role of this department being about education rather than just a place to dump all the cataloguing jobs that DAM users don’t fancy taking on themselves…”
  2. Incredibly special Special Collections libraries complete with an archive of punk.
  3. How to design…

View On WordPress

September 11, 2014 12:07 PM

Coyle's InFormation

Philosophical Musings: The Work

We can't deny the idea of work - opera, oeuvre - as a cultural product, a meaningful bit of human-created stuff. The concept exists, the word exists. I question, however, that we will ever have, or that we should ever have, precision in how works are bounded; that we'll ever be able to say clearly that the film version of Pride and Prejudice is or is not the same work as the book. I'm not even sure that we can say that the text of Pride and Prejudice is a single work. Is it the same work when read today that it was when first published? Is it the same work each time that one re-reads it? The reading experience varies based on so many different factors - the cultural context of the reader; the person's understanding of the author's language; the age and life experience of the reader.

The notion of work encompasses all of the complications of human communication and its consequent meaning. The work is a mystery, a range of possibilities and of possible disappointments. It has emotional and, at its best, transformational value. It exists in time and in space. Time is the more canny element here because it means that works intersect our lives and live on in our memories, yet as such they are but mere ghosts of themselves.

Take a book, say, Moby Dick; hundreds of pages, hundreds of thousands of words. We read each word, but we do not remember the words -- we remember the book as inner thoughts that we had while reading. Those could be sights and smells, feelings of fear, love, excitement, disgust. The words, external, and the thoughts, internal, are transformations of each other; from the author's ideas to words, and from the words to the reader's thoughts. How much is lost or gained during this process is unknown. All that we do know is that, for some people at least, the experience is a vivid one. The story takes on some meaning in the mind of the reader, if one can even invoke the vague concept of mind without torpedoing the argument altogether.

Brain scientists work to find the place in the maze of neuronic connections that can register the idea of "red" or "cold" while outside of the laboratory we subject that same organ to the White Whale, or the Prince of Denmark, or the ever elusive Molly Bloom. We task that organ to taste Proust's madeleine; to feel the rage of Ahab's loss; to become a neighbor in one of Borges' villages. If what scientists know about thought is likened to a simple plastic ping-pong ball, plain, round, regular, white, then a work is akin to a rainforest of diversity and discovery, never fully mastered, almost unrecognizable from one moment to the next.

As we move from textual works to musical ones, or on to the visual arts, the transformation from the work to the experience of the work becomes even more mysterious. Who hasn't passed quickly by an unappealing painting hanging on the wall of a museum before which stands another person rapt with attention? If the painting doesn't speak to us, then we have no possible way of understanding what it is saying to someone else.

Libraries are struggling to define the work as an abstract but well-bounded, nameable thing within the mass of the resources of the library. But a definition of work would have to be as rich and complex as the work itself. It would have to include the unknown and unknowable effect that the work will have on those who encounter it; who transform it into their own thoughts and experiences. This is obviously impractical. It would also be unbelievably arrogant (as well as impossible) for libraries to claim to have some concrete measure of "workness" for now and for all time. One has to be reductionist to the point of absurdity to claim to define the boundaries between one work and another, unless they are so far apart in their meaning that there could be no shared messages or ideas or cultural markers between them. You would have to have a way to quantify all of the thoughts and impressions and meanings therein and show that they are not the same, when "same" is a target that moves with every second that passes, every synapse that is fired.

Does this mean that we should not try to surface workness for our users? Hardly. It means that it is too complex and too rich to be given a one-dimensional existence within the current library system. This is, indeed, one of the great challenges that libraries present to their users: a universe of knowledge organized by a single principle as if that is the beginning and end of the story. If the library universe and the library user's universe find few or no points of connection, then communication between them fails. At best, like the user of a badly designed computer interface, if any communication is to take place it is the user who must adapt. This in itself should be taken as evidence of superior intelligence on the part of the user as compared to the inflexibility of the mechanistic library system.

Those of us in knowledge organization are obsessed with neatness, although few as much as the man who nearly single-handedly defined our profession in the late 19th century; the man who kept diaries in which he entered the menu of every meal he ate; whose wedding vows included a mutual promise never to waste a minute; the man enthralled with the idea that every library be ordered by the simple mathematical concept of the decimal.

To give Dewey due credit, he did realize that his Decimal Classification had to bend reality to practicality. As the editions grew, choices had to be made on where to locate particular concepts in relation to others, and in early editions, as the Decimal Classification was used in more libraries and as subject experts weighed in, topics were relocated after sometimes heated debate. He was not seeking a platonic ideal or even a bibliographic ideal; his goal was closer to the late 19th century concept of efficiency. It was a place for everything, and everything in its place, for the least time and money.

Dewey's constraints of an analog catalog, physical books on physical shelves, and a classification and index printed in book form forced the limited solution of just one place in the universe of knowledge for each book. Such a solution can hardly be expected to do justice to the complexity of the Works on those shelves. Today we have available to us technology that can analyze complex patterns, can find connections in datasets that are of a size way beyond human scale for analysis, and can provide visualizations of the findings.

Now that we have the technological means, we should give up the idea that there is an immutable thing that is the work for every creative expression. The solution then is to see work as a piece of information about a resource, a quality, and to allow a resource to be described with as many qualities of work as might be useful. Any resource can have the quality of the work as basic content, a story, a theme. It can be a work of fiction, a triumphal work, a romantic work. It can be always or sometimes part of a larger work, it can complement a work, or refute it. It can represent the philosophical thoughts of someone, or a scientific discovery. In FRBR, the work has authorship and intellectual content. That is precisely what I have described here. But what I have described is not based on a single set of rules, but is an open-ended description that can grow and change as time changes the emotional and informational context as the work is experienced.

I write this because we risk the petrification of the library if we embrace what I have heard called the "FRBR fundamentalist" view. In that view, there is only one definition of work (and of each other FRBR entity). Such a choice might have been necessary 50 or even 30 years ago. It definitely would have been necessary in Dewey's time. Today we can allow ourselves greater flexibility because the technology exists that can give us different views of the same data. Using the same data elements we can present as many interpretations of Work as we find useful. As we have seen recently with analyses of audio-visual materials, we cannot define work for non-book materials identically to that of books or other texts. [1] [2] Some types of materials, such as works of art, defy any separation between the abstraction and the item. Just where the line will fall between Work and everything else, as well as between Works themselves, is not something that we can pre-determine. Actually, we can, I suppose, and some would like to "make that so", but I defy such thinkers to explain just how such an uncreative approach will further new knowledge.

[1] Kara Van Malssen. BIBFRAME A-V modeling study
[2] Kelley McGrath. FRBR and Moving Images

by Karen Coyle (noreply@blogger.com) at September 11, 2014 05:30 AM

September 10, 2014

Bibliographic Wilderness

Cleaning up the Rails backtrace cleaner; Or, The Engine Stays in the Picture!

Rails has for a while included a BacktraceCleaner that removes some lines from backtraces, and reformats others to be more readable.

(There’s an ActiveSupport::BacktraceCleaner, although the one in your app by default is actually a subclass of that, which sets some defaults, Rails::BacktraceCleaner. That’s a somewhat odd way to implement Rails defaults on an AS::BacktraceCleaner, but oh well).

This is pretty crucial, especially since recent versions of Rails can have pretty HUGE call stacks, due to reliance on Rack middleware and other architectural choices.

I rely on clean stack traces in the standard Rails dev-mode error page, in my log files of fatal uncaught exceptions — but also in some log files I write myself, where I catch and recover from an exception, but want to log where it came from anyway, ideally with a clean stacktrace. `Rails.backtrace_cleaner.clean( exception.backtrace )`

A few problems I had with it though:

  • Several of my apps are based on kind of ‘one big Rails engine’. (Blacklight, Umlaut).  The default cleaner will strip out any lines that aren’t part of the local app, but I really want to leave the ‘main engine’ lines in. That was my main motivation to look into this, but as long as I was at it, a couple other inconveniences…
  • The default cleaner nicely reformats lines from gems to remove the filepath to the gem dir, and replace with just the name of the gem. But this didn’t seem to work for gems listed in Bundler as :path (or, I think, :github ?), that don’t live in the standard gem repo. And that ‘main engine gem’ would often be checked out thus, especially in development.
  • Stack trace lines that come from ERB templates include a dynamically generated internal method name, which is really long and makes the stack trace confusing — the line number in the ERB file is really all we need. (At first I thought the Rails ‘render template pattern filter’ was meant to deal with that, but I think it’s meant for something else)

Fortunately, you can remove and add/or your own silencers (which remove lines from the stack trace), and filters (which reformat stack trace lines) from the ActiveSupport/Rails::BacktraceCleaner.

Here’s what I’ve done to make it the way I want. I wanted it built directly into Umlaut (a Rails Engine), so this is written to go in Umlaut’s `< Rails::Engine` class. But you could do something similar in a local app, probably in the `initializers/backtrace_silencers.rb` file that Rails has left as a stub for you already.

Note that all filters are executed before silencers, so your silencer has to be prepared to recognize already-filtered input.

module Umlaut
  class Engine < Rails::Engine
    engine_name "umlaut"

    #...

    initializer "#{engine_name}.backtrace_cleaner" do |app|
      engine_root_regex = Regexp.escape (self.root.to_s + File::SEPARATOR)

      # Clean those ERB lines, we don't need the internal autogenerated
      # ERB method, what we do need (line number in ERB file) is already there
      Rails.backtrace_cleaner.add_filter do |line|
        line.sub /(\.erb:\d+)\:in `__.*$/, "\\1"
      end

      # Remove our own engine's path prefix, even if it's
      # being used from a local path rather than the gem directory.
      Rails.backtrace_cleaner.add_filter do |line|
        line.sub(/^#{engine_root_regex}/, "#{engine_name} ")
      end

      # Keep Umlaut's own stacktrace in the backtrace -- we have to remove Rails
      # silencers and re-add them how we want.
      Rails.backtrace_cleaner.remove_silencers!

      # Silence what Rails silenced, UNLESS it looks like
      # it's from Umlaut engine
      Rails.backtrace_cleaner.add_silencer do |line|
        (line !~ Rails::BacktraceCleaner::APP_DIRS_PATTERN) &&
        (line !~ /^#{engine_root_regex}/  ) &&
        (line !~ /^#{engine_name} /)
      end
    end

    #...
  end
end
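With that in place, the cleaned backtraces can be used anywhere you log a rescued exception. A minimal usage sketch (the risky method and the log format are just illustrative):

begin
  do_something_risky            # hypothetical method that may raise
rescue => e
  # Log a readable, engine-aware backtrace instead of the full Rack/Rails stack
  Rails.logger.error "#{e.class}: #{e.message}\n  " +
    Rails.backtrace_cleaner.clean(e.backtrace).join("\n  ")
  raise
end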

 

 


Filed under: General

by jrochkind at September 10, 2014 04:40 PM

September 09, 2014

025.431: The Dewey blog

Dr. Lois Mai Chan

We were shocked and saddened to learn of the passing of Dr. Lois Mai Chan on August 20.

Lois was a member of the Decimal Classification Editorial Policy Committee (EPC) 1975-1993, and she served as chair of EPC 1986-1991.  Before her last EPC meeting, Peter Paulson (then executive director of Forest Press) wrote: "As most of you already know, this will be the last EPC meeting for Lois Chan, who has served the Committee wisely and well for eighteen years.  We shall miss her careful and thoughtful reading of exhibits, her ability to clarify ambiguities, and her soothing composure when opinions conflict.  We will be honoring Lois at the Committee dinner on Wednesday evening."

Lois was co-author of Dewey Decimal Classification: A Practical Guide (1994, 1996) and Dewey Decimal Classification: Principles and Application (2003).  

Lois was part of the teams that introduced Dewey at workshops held in various locations around the world:  in 1995 at Crimea '95 and in Moscow at the Russian National Public Library for Science and Technology, at IFLA in Beijing in 1996, and in several venues in Hanoi in 1997. The proceedings of the Beijing workshop were published as Dewey Decimal Classification: Edition 21 and International Perspectives: Papers from a Workshop presented at the General Conference of the International Federation of Library Associations and Institutions (IFLA), Beijing, China, August 29, 1996. Lois gave the opening remarks, the summary and closing remarks, and was co-editor of the publication—doing what was typical for her.  

Her deep understanding of the principles of the DDC, and her consistently careful, hard work to communicate that understanding to students around the world made a lasting contribution to the development and teaching of the DDC.

by Juli at September 09, 2014 08:37 PM

First Thus

ACAT Improving catalogues

On 09/09/2014 14.11, Scott R Piepenburg wrote:
> In one situation I was at, it was highly desirable for people to be able to say “I want Tom Hanks movies where he was an actor as opposed to a director” or “I want the works of Jim Steinman as a composer versus a performer.” Maybe it is an isolated situation, but in this particular location, it was highly useful to be able to find people who serve multiple roles (actor, producer, director, etc.) to the exclusion of others. The challenge is first to code the information, then to make a user-friendly and useful interface to be able to retrieve it.
Yes, I agree with this. People have asked these kinds of questions for years and they have gone to the tools that provide them. For movies, there was the serial “Film directors : a complete guide” and other directories. For music, there have been “Who’s who in music” and “Musician’s directory” among others. Plus there was the occasional almanac or two.

Today however, for movies, you can go to the IMDB, among other places. Here is Tom Hanks’s page. http://www.imdb.com/name/nm0000158/ I have found Google itself to be pretty good too. For music, I am not quite so sure if there is one “best” but there is allmusic.com. Here is the page for Jim Steinman http://www.allmusic.com/artist/jim-steinman-mn0000852399, and of course, we should not forget about Wikipedia for any of this.

This is so simple, and *free* (amazingly!) that the next question seems obvious enough: why should we spend our precious resources re-creating things that already exist? It would take a long, long, long time (if ever!) before anything we do could be nearly as good as these tools are right now–and by that time (after a few decades) what will exist for the public then?! The mind boggles at what there could be in only five years, much less 20 or 30. The very first iPhone came out in 2007, and look at everything that has happened since then.

So, what should we do? Do we spend our time adding these relator codes to our records, when we know that anything we make will remain demonstrably inferior to those other tools? Why? Or do we try to work with these other tools in all kinds of cool ways? Again, that is a task primarily for the computer technicians, and lots of them are making these sorts of tools right now. This would leave catalogers to work on what they do best.

The information that is in our records right now could be used far better than it is. That is where our focus should be.


by James Weinheimer at September 09, 2014 02:28 PM

Improving catalogues

Posting to Autocat

On 08/09/2014 23.59, Charles Pennell wrote:

The introduction of Bibframe doesn’t change anything about legacy data, just as AACR didn’t change anything about pre-existing ALA rules records or AACR2 to pre-existing AACR1 records. …
In your example, $e can be added retrospectively when role information has been provided through the 245$c, 508 or 511, but again, this takes effort and needs to be subjected to a business model. We have been doing a lot of clean-up of our legacy e-book records, standardizing 856 notes, eliminating duplicate records, separating print from e-, etc. using global edits plus a lot of grunt work for which we have community support in the interest of providing better access. It can be done when you have a plan and support.

Sure, we can do all kinds of things, but in a world of declining numbers of catalogers who are spending less and less time actually cataloging (from everything I have heard and read), we should be asking ourselves: what is the best use of our diminishing resources? Are they best spent adding information to older records that have been around for decades or more, or doing something else? Where is the evidence that the public wants and needs this relator information so much more than other things we could do? While I have heard many–too many–complaints about our catalogs, I have certainly never seen or heard anything from a user that says that it is critical for them to search for people as “editors” or “thesis advisors” or something like that. And all this at a time when the authority records do not work.

Which is more important:
1) to find specific authors reliably by their function as editors, “creators” (whatever that catch-all word means) and so on http://www.loc.gov/marc/relators/relaterm.html, or

2) to find out that if you want to do a good search for Mark Twain, then you need to know the following :

“For works of this author written under other names, search also under Clemens, Samuel Langhorne, 1835-1910, Snodgrass, Quintus Curtius, 1835-1910 Conte, Louis de, 1835-1910, Alden, Jean Francois, 1835-1910″

along with the associated links? http://lccn.loc.gov/n79021164 How is anybody supposed to just “know” that?

To do 1) demands a huge amount of resources and time from the entire cataloging community. A product that could give even halfway-decent results (such as finding a specific person in the role of an editor, say 50% of the time?) will take what? 5 years? 10 years? Shouldn’t we find out how long it would take and what resources would be needed to get it done in–say 20 years vs. 10 years vs. 5 years? More important, shouldn’t we at least find out if this is so important to the public–because librarians themselves do not need that information to manage the collection.

To do 2) is entirely different. Everything exists now, catalogers need do nothing, and it remains only that some programmers/computer technicians build it, and share it. I emphasize “only” because that is a loaded word here, but in any case, it demands exponentially fewer resources from far fewer people than option 1). Anyway, I would argue that it needs to be done in any case if our authority records are ever to become useful to the public again.

So, what is the wiser choice? Of course, I may be wrong and it could turn out that research would show the public would vastly prefer the relator information to the information in the authority files, or prefer the relator information to having catalogers actually catalog more resources–which could help everybody deal with the problem of “information overload” that is causing everybody to pull their hair out. Shouldn’t we at least find out? To find out, the public needs to be researched in semi-scientific ways. Of course, that is one part of “making the business case”.

Nobody can avoid making a business case for what they do. Coming up with a valid business case is a complicated task and you may hear and discover things you don’t like one bit, but it is nonetheless inevitable. You can either make the case before implementation of a project, so that you can avoid as many problems and errors as possible, or you have to do it afterwards when the problems and errors are clear to everyone, and you find yourself trying to explain them away.


by James Weinheimer at September 09, 2014 11:04 AM

Bibliographic Wilderness

Cardo is a really nice free webfont

Some of the fonts on google web fonts aren’t that great. And I’m not that good at picking the good ones from the not-so-good ones on first glance either.

Cardo is a really nice old-style serif font that I originally found recommended on some list of “the best of google fonts”.

It’s got a pretty good character repertoire for latin text (and I think Greek). The Google Fonts version doesn’t seem to include Hebrew, even though some other versions might?  For library applications, the more characters the better, and it should have enough to deal stylishly with whatever letters and diacritics you throw at it in latin/germanic languages, and all the usual symbols (currency, punctuation, etc.).

I’ve used it in a project that my eyeballs have spent a lot of time looking at (not quite done yet), and been increasingly pleased by it; it’s nice to look at and to read, especially on a ‘retina’ display. (I wouldn’t use it for headlines, though.)


Filed under: Uncategorized

by jrochkind at September 09, 2014 04:39 AM

September 08, 2014

First Thus

ACAT Improving catalogues

Posting to Autocat

On 9/8/2014 8:37 PM, Charles Pennell wrote:

If anything, I think Bibframe is on target to create even greater granularity for our data than MARC (including MARCXML) ever could have. Plus, it has the attention of those who are trying to provide better access to that data through the semantic Web and are looking for a more sympathetic data structure and vocabulary than what is currently offered through non-library providers.

This is one of those points I have never understood. Right now, our tools are powerful enough (either with MySQL or with XML) that we can manipulate any (and I mean ANY) information in our records in any way we could want (and this means ANY WAY). Changing our formats will not change this at all. For instance, if we want to enable people to do the FRBR user tasks: to find/identify/select/obtain works/expressions/manifestations/items by their authors/title/subjects, that can be done RIGHT NOW. By anybody. This is not because of any changes in our cataloging rules or formats, but because the “innards” of the catalog have been changed with Lucene indexing that now allows anybody to use the facets (created automatically through the new indexing) to do that. To prove it to yourself, all you have to do is search WorldCat for any uniform title, e.g. Dante’s “Divine Comedy”. With this search, anybody can then click on different formats, different dates, languages and so on. These facets and the user interface can be changed in any way we want. Any uniform title can be used.

Yes, all of this can be improved a lot, especially the user interface, but catalogers need do nothing. These changes can be made by the computer technicians alone. Catalogers just need to add all the uniform titles, just as they have always done.

To make records more granular means to add information that is currently not in our records. What does this mean? For instance, we could add that a specific person was a translator instead of just an added entry. In a new record we make today, we can add the correct $e relator code. That is pretty easy. But what about in this record http://www.worldcat.org/oclc/7480551, where Henry Francis Cary was the translator? Who is going to add it to that record? It won’t add itself! Or to the other expressions/manifestations of his translation? Or for all of the other translators of all of the other versions? Or for all of the other translations of all works? How many records and how much work would that be? What about all of the other relator codes, including the WEMI relationships?
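
Just to make concrete how much hand-work each of those records represents, here is a minimal sketch of one retrospective pass in Python with pymarc. The file name and the heading are only illustrations, and the hard part (knowing that this particular person really was the translator of this particular resource) still has to come from a human:

    # Append |e translator to an existing 700 added entry, record by record.
    # Assumes pymarc and a hypothetical file records.mrc; the heading used
    # here is only an illustration.
    import pymarc

    TRANSLATOR_HEADING = "Cary, Henry Francis,"

    updated = []
    with open("records.mrc", "rb") as fh:
        for record in pymarc.MARCReader(fh):
            for field in record.get_fields("700"):
                if field["a"] == TRANSLATOR_HEADING and not field["e"]:
                    field.add_subfield("e", "translator")
            updated.append(record)

    with open("records-updated.mrc", "wb") as out:
        for record in updated:
            out.write(record.as_marc())

Multiply that by every translator, every expression and manifestation, and every other relator and relationship, and the scale of the retrospective problem becomes obvious.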

Wow!

This is what I was getting at in my podcast Cataloging Matters No. 16: Catalogs, Consistency and the Future

If we don’t add the coding to these older records, then they will effectively be hidden whenever someone clicks on a person as translator; that is, only the new headings will get:

Cary, Henry Francis, |d 1772-1844, |e translator

and the old ones will not (that is, until someone recodes them). To make this search useful, every single record will have to be recataloged; otherwise, when people search for Cary as a translator, those records cannot, by definition, be found.

How is this different from other databases that IT people work with? In my experience, business people (where this happens regularly) deal with it by saying, “Well, we are dealing with old, obsolete information, so we can just archive it. We can put it in a zip file and let people download it if they want it.” I have heard precisely those words.

Maybe this is correct when talking about invoice information that is 5+ years old (or maybe not), or about personnel information, or even medical information that is 15 years old or older. But it is 100% incorrect for library catalog information. Why?

Because the materials received 50 or 100 years ago, and the records made for them, may be among the most important and valuable materials in your library. Remember, we are talking about everything made before just 2 or 3 years ago; that is quite a bit. If you make those records a lot harder to find, you automatically make the materials they describe harder to find, and as a result the collection itself is less useful. Therefore, the information in a library catalog is fundamentally different from the information in most other databases.

Library catalogs have always been based on the rule of consistency, and I still have seen nothing at all that replaces that. For instance, linked data is still based on putting in links consistently. If the information in the records is inconsistent (and adding relator information etc. only to the new records is a perfect example of that), that makes it at least a whole lot harder to find the earlier records–and therefore, the materials they describe.

Perhaps it works in some databases better than others, but absolutely not in a library catalog. If we change our new records, we must change our old ones or people won’t find them (or at least it will be a lot harder and more confusing). We either care about that or we don’t. If we care, this means massive retrospective conversions, and in our dwindling cataloging departments, we must confess that that means it will never be done. That is a simple fact.

As Mac has said, our records as they stand could be much more useful to people than they are, but that is a task for the IT people, who would change the catalog “innards”. One part would be to make the authority records actually useful again.

Now, to return to Bibframe etc. Sure, we can and should change our format (it should have been done at least 15 years ago), but that has much more to do with getting out of traditional ILMSs, being able to use cheaper, more powerful tools (e.g. Lucene working with MARCXML), and making our data more available for non-library uses.

But that is another topic.


by James Weinheimer at September 08, 2014 09:15 PM

ACAT Authorities and references

Posting to Autocat

On 08/09/2014 15.40, Brian Briscoe wrote:

How do we catalogers get invited to meet with ILS developers so that we can work together to create the kind of tools that will really serve the needs of our users?

and

On 08/09/2014 15.48, Scott R Piepenburg wrote:

As Brian writes ” How do we catalogers get invited to meet with ILS developers so that we can work together to create the kind of tools that will really serve the needs of our users?” Easy! Write the checks.

Yes, and therein lies the real difficulty. I like to think that I can speak both languages (catalog-speak and tech-speak), and I have succeeded in interesting IT developers. But that is not nearly enough, because it is their supervisors who have to be interested! Plus, it is not the catalogers who have control of the money to pay them; catalogers (or somebody) must first convince the library administrators to cut loose the funds. THEN, in the case of a normal ILS, someone else (neither the catalogers nor the library administrators) must convince the ILS company administrators to work on it. Achieving all of that is anything but easy.

That is why I have placed at least some hopes on the open-source software movement. If someone wants an open-source catalog to work in a certain way, you can pay the money and it will be done. Or, if you can get a developer interested and it is not too much work, someone might actually do it on their own time. But as I said, it’s tough, even with the best of intentions, when everybody has too much to do already and is working flat out.

I have found that demonstrating the power of authorities, which seems so obvious to me, is very abstract for others to grasp. I suspect that most people alive today have never experienced it, while others have forgotten. Plus, the new ways are popular. When I have tried to demonstrate how authority control could work for us today, I have been reduced to showing how it worked in card and book catalogs, which makes me look like the biggest Luddite that ever walked the face of the earth. It turns people off immediately and I know it. To get some possible movement, I think there needs to be a small prototype so that people can see how it might help them.

At least building a prototype is possible now that the LC authorities are available for download and manipulation. So, it could be done; it is conceivable. But the first step is to get people (i.e. catalogers) to see that the catalog really is broken; that it has been broken for a long time, and that is too bad, but it really and truly can be fixed. It really can.
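
For what it is worth, here is the sort of tiny prototype I have in mind, in Python: it asks the public id.loc.gov service for name authority suggestions as a user types, which is one small way of putting the reference structure back in front of people. I am assuming the suggest endpoint and its OpenSearch-style JSON response here, so the details should be checked against the live service (and the bulk authority downloads are the other route):

    # Ask the id.loc.gov name authority file for suggestions matching a typed
    # term. The suggest endpoint and the shape of its JSON response
    # (query, labels, descriptions, URIs) are assumptions; verify them against
    # the live service before relying on this.
    import json
    import urllib.parse
    import urllib.request

    def suggest_names(term):
        """Return (label, uri) pairs for candidate authorized headings."""
        url = ("https://id.loc.gov/authorities/names/suggest/?q="
               + urllib.parse.quote(term))
        with urllib.request.urlopen(url) as response:
            query, labels, descriptions, uris = json.load(response)
        return list(zip(labels, uris))

    if __name__ == "__main__":
        for label, uri in suggest_names("Cary, Henry Francis"):
            print(label, uri)

Even something that small, wired into a catalog’s search box, would let people see the references working for them again instead of just hearing about how the cards used to do it.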

Unfortunately, the cataloging community is currently focused on rule changes, format changes, and striving for the universe of FRBR and linked data, as if that will make the real difference. That takes tremendous resources, time, money and so on from shrinking departments. I don’t know if anything will come from the cataloging community. At least not anytime soon.


by James Weinheimer at September 08, 2014 02:31 PM