Planet Cataloging

December 21, 2014

Resource Description & Access (RDA)

Questions and Answers on the Treatment of Series in RDA and MARC21


  • Join the Google+ Community RDA Cataloging to view, ask about, and share information and issues related to Resource Description & Access (RDA) and AACR2 Cataloging

by Salman Haider (noreply@blogger.com) at December 21, 2014 03:51 AM

December 20, 2014

Terry's Worklog

Working with SPARQL in MarcEdit

Over the past couple of weeks, I’ve been working on expanding the linking services that MarcEdit can work with in order to create identifiers for controlled terms and headings.  One of the services that I’ve been experimenting with is NLM’s beta SPARQL endpoint for MESH headings.  MESH has always been something of a foreign thing to me.  While I was a cataloger in my past, my primary area of expertise was geographic materials (analog and digital), as well as traditional monographic data.  While MESH looks like LCSH, it’s quite different as well.  So, I’ve been spending some time trying to learn a little more about it, while working on a process to consistently query the endpoint to retrieve the identifier for a preferred term. It’s been an enlightening process, but also one that has led me to think about how I might create a process that could be used beyond this simple use-case, and potentially provide MarcEdit with an RDF engine that could be utilized down the road to make it easier to query, create, and update graphs.

Since MarcEdit is written in .NET, this meant looking to see what components currently exist that provide the type of RDF functionality that I may need down the road.  Fortunately, a number of components exist; the one I’m utilizing in MarcEdit is dotnetrdf (https://bitbucket.org/dotnetrdf/dotnetrdf/wiki/browse/).  The component provides a robust set of functionality that supports everything I want to do now, and should cover whatever I want to do later.
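To give a sense of how little code the querying side takes, here’s a minimal sketch using dotnetrdf’s SparqlRemoteEndpoint class (simplified for illustration; this isn’t MarcEdit’s actual code, and the endpoint and query are the MESH examples used below):

using System;
using VDS.RDF.Query;

class SparqlBrowserSketch
{
    static void Main()
    {
        // The remote endpoint to query (NLM's beta MESH endpoint).
        var endpoint = new SparqlRemoteEndpoint(
            new Uri("http://id.nlm.nih.gov/mesh/sparql"));

        // Find the descriptor whose preferred concept carries this label.
        string query = @"
            PREFIX meshv: <http://id.nlm.nih.gov/mesh/vocab#>
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
            SELECT DISTINCT ?d ?dLabel
            FROM <http://id.nlm.nih.gov/mesh2014>
            WHERE {
              ?d meshv:preferredConcept ?q .
              ?q rdfs:label 'Congenital Abnormalities' .
              ?d rdfs:label ?dLabel .
            }";

        // SELECT queries come back as a set of variable bindings.
        SparqlResultSet results = endpoint.QueryWithResultSet(query);
        foreach (SparqlResult result in results)
        {
            Console.WriteLine("{0} => {1}", result["d"], result["dLabel"]);
        }
    }
}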

With a toolkit found, I spent some time integrating it into MarcEdit, which is never a small task.  However, the outcome will be a couple of new features to start testing out the toolkit and start providing users with the ability to become more familiar with a key semantic web technology, SPARQL.  The first new feature will be the integration of MESH as a known vocabulary that will now be queried and controlled when run through the linked data tool.  The second new feature is a SPARQL Browser.  The idea here is to give folks a tool to explore SPARQL endpoints and retrieve the data in different formats.  The proof of concept supports XML, RDFXML, HTML, CSV, Turtle, NTriple, and JSON as output formats.  This means that users can query any SPARQL endpoint and retrieve data back.  In the current proof of concept, I haven’t added the ability to save the output – but I likely will prior to releasing the Christmas MarcEdit update.

Proof of Concept

While this is still somewhat conceptual, the current SPARQL Browser looks like the following:

[Image: the SPARQL Browser interface]

At present, the Browser assumes that data resides at a remote endpoint, but I’ll likely include the ability to load local RDF, JSON, or Turtle data and to query that data as a local endpoint.  Anyway, right now, the Browser takes a URL to the SPARQL endpoint, and then the query.  The user can then select the format in which the result set should be output.

Using NLM as an example, say a user wanted to query for the specific term: Congenital Abnormalities – utilizing the current proof of concept, the user would enter the following data:

SPARQL Endpoint: http://id.nlm.nih.gov/mesh/sparql

SPARQL Query:

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX meshv: <http://id.nlm.nih.gov/mesh/vocab#>
PREFIX mesh: <http://id.nlm.nih.gov/mesh/>

SELECT distinct ?d ?dLabel 
FROM <http://id.nlm.nih.gov/mesh2014>
WHERE {
  ?d meshv:preferredConcept ?q .
  ?q rdfs:label 'Congenital Abnormalities' . 
  ?d rdfs:label ?dLabel . 
} 
ORDER BY ?dLabel 

Running this query within the SPARQL Browser produces a result set that is formatted internally into a graph for output purposes.

[Images: SPARQL Browser results in several of the supported output formats]

The images above show a couple of the different output formats.  For example, the full JSON output is the following:

{
  "head": {
    "vars": [
      "d",
      "dLabel"
    ]
  },
  "results": {
    "bindings": [
      {
        "d": {
          "type": "uri",
          "value": "http://id.nlm.nih.gov/mesh/D000013"
        },
        "dLabel": {
          "type": "literal",
          "value": "Congenital Abnormalities"
        }
      }
    ]
  }
}

The idea behind creating this as a general-purpose tool is that, in theory, it should work with any SPARQL endpoint.  The Project Gutenberg Metadata endpoint, for example, can be explored in exactly the same way using the Browser.

[Image: the SPARQL Browser querying the Project Gutenberg Metadata endpoint]

Future Work

At this point, the SPARQL Browser is a proof-of-concept tool, but one that I will make available as part of the MARCNext research toolset in the next update:

[Image: the MARCNext research toolset]

Going forward, I will likely refine the Browser based on feedback, but more importantly, start looking at how the new RDF toolkit might allow for the development of dynamic form generation for editing RDF/BibFrame data…at least somewhere down the road.

–TR

[1] SPARQL (W3C): http://www.w3.org/TR/rdf-sparql-query/
[2] SPARQL (Wikipedia): http://en.wikipedia.org/wiki/SPARQL
[3] SPARQL Endpoints: http://www.w3.org/wiki/SparqlEndpoints
[4] MarcEdit: http://marcedit.reeset.net
[5] MARCNext: http://blog.reeset.net/archives/1359

by reeset at December 20, 2014 06:06 AM

Resource Description & Access (RDA)

Correct Coding of ISBN in MARC21 field 020 in RDA & AACR2 Cataloging with Examples



Several years ago the definition of $z of the 020 (International Standard Book Number) was expanded—it is now used for “structurally invalid” ISBNs (those that are too short, too long, have the wrong check digit, etc.) and also for “application invalid” ISBNs (ISBNs for a manifestation that would be described in a different bibliographic record).

The LC-PCC Policy Statement for 2.15.1.7 provides the following instruction:  
Record ISBNs in $z (Canceled/invalid) of MARC field 020 if they clearly represent a different manifestation from the resource being cataloged and would require a separate record (e.g., an ISBN for the large print version, e-book, or teacher’s manual on the record for a regular trade publication). If separate records would not be made (e.g., most cases where ISBNs are given for both the hardback and paperback simultaneously), or in cases of doubt, record the ISBNs in $a (International Standard Book Number) of MARC field 020.

Please remember to use $z for ISBNs when appropriate. For regular print publications, this is most likely to occur when you also have an ISBN for a large print edition or e-book that would be cataloged on a separate record.

When we do not use the correct subfield code in field 020, systems that receive records from LC may incorrectly merge or replace records for the wrong format—we have received several complaints about this, and we hope we can improve the situation with your help.

[Source: Dave Reser, Library of Congress, Policy and Standards Division] 


<<<<<---------->>>>>


Question: Record ISBNs in 020 $z if they represent a different manifestation from the resource being cataloged.

  • If a printed monograph presents different ISBNs for different manifestations, do we have to transcribe them as in the example given below?


AACR2    020 $a 9780415692847 (hardback: alk. paper)
         020 $a 9780203116852 (e-book)

RDA      020 $a 9780415692847 (hardback: alk. paper)
         020 $z 9780203116852 (e-book)   (recorded in $z because the ISBN clearly represents a different manifestation, the e-book version)

Answer: Yes, the example you have given shows LC’s practice documented in LC PCC PS 2.15.1.7 for multiple ISBNs:

“…if they clearly represent a different manifestation from the resource being cataloged and would require a separate record (e.g., an ISBN for the large print version, e-book, or teacher’s manual on the record for a regular trade publication). If separate records would not be made (e.g., most cases where ISBNs are given for both the hardback and paperback simultaneously), or in cases of doubt, record the ISBNs in $a”

<<<<<---------->>>>>


See also the RDA Blog labels (categories) in the links below for related posts on the treatment of ISBNs in RDA.

by Salman Haider (noreply@blogger.com) at December 20, 2014 01:08 AM

December 19, 2014

TSLL TechScans

Expensing e-books: how much should patron habit influence collection development?

This article by Terrance L. Cottress and Brigitte Bell explores the difficulties in managing print and ebook expenditures in today's libraries.

http://dx.doi.org/10.1108/BL-09-2014-0023


by noreply@blogger.com (Marlene Bubrick) at December 19, 2014 07:39 PM

Metadata Matters (Diane Hillmann)

The Jane-athon Prototype in Hawaii

The planning for the Midwinter Jane-athon pre-conference has been taking up a lot of my attention lately. It’s a really cool idea (credit to Deborah Fritz) to address the desire we’ve been hearing for some time for a participatory, hands-on session on RDA. And let’s be clear: we’re not talking about the RDA instructions–this is about the RDA data model, vocabularies, and RDA’s availability for linked data. We’ll be using RIMMF (RDA in Many Metadata Formats) as our visualization and data creation tool, setting up small teams with leaders who’ve been prepared to support them, and a wandering phalanx of coaches to give help on the fly.

Part of the planning has to do with building a set of RIMMF ‘records’ to start with, to which participants can add their own resources and explore the rich relationships in RDA. We’re calling these ‘r-balls’ (a cross between RIMMF and tarballs). These zipped-up r-balls will be available for others to use for their own homegrown sessions, along with instructions for using RIMMF, setting up a Jane-athon (or other themed -athon), and contributing their own r-balls for the use of others. In case you’ve not picked it up, this is a radically different training model, and we’d like to make it possible for others to play, too.

That’s the plan for the morning. After lunch we’ll take a look at what we’ve done and prise out the issues we’ve encountered, along with others we know about. The hope is that participants will walk out the door with an understanding both of what RDA is (more than the instructions) and of how it fits into the emerging linked data world.

I recently returned from a trip to Honolulu, where I did a prototype Jane-athon workshop for the Hawaii Library Association. I have to admit that I didn’t give much thought to how difficult it would be to do solo, but I did have the presence of mind to give the organizer of the workshop some preliminary setup instructions (based on what we’ll be doing in Chicago) to ensure that there would be access to laptops with software and records pre-loaded, and a small cadre of folks who had been working with RIMMF to help out with data creation on the day.

The original plan included a day before the workshop with a general presentation on linked data and some smaller meetings with administrators and others in specialized areas. It’s a format I’ve used before and the smaller meetings after the presentation generally bring out questions that are unlikely to be asked in a larger group.

What I didn’t plan for was that I wouldn’t be able to get out of Ithaca on the appointed day (the day before the presentation) thanks not to bad weather, but instead to a non-functioning plane which couldn’t be repaired. So after a phone discussion with Hawaii, I tried again the next day, and everything went smoothly. On the receiving end there was lots of effort expended to make it all work in the time available, with some meetings dribbling into the next day. But we did it, thanks to organizer Nancy Sack’s prodigious skills and the flexibility of all concerned.

Nancy asked the Jane-athon participants to fill out an evaluation, and sent me the anonymized results. I really appreciated that the respondents added many useful (and frank) comments to the usual range of questions. Those comments in particular were very helpful to me, and were passed on to the other MW Jane-athon organizers. One of the goals of the workshop was to help participants visualize, using RIMMF, how familiar MARC records could be automatically mapped into the FRBR structure of RDA, and how that process might begin to address concerns about future workflow and reuse of MARC records. Another goal was to illustrate how RDA’s relationships enhanced the value of the data, particularly for users. For the most part, it looked as if most of the participants understood the goals of the workshop and felt they had gotten value from it.

But there were those who provided frank criticism of the workshop goals and organization (as well as the presenter, of course!). Some of these criticisms involved the limitations of the workshop: participants wanted more information on how they could put their new knowledge to work, right now. The clearest expression of this desire came in as follows:

“I sort of expected to be given the whole road map for how to take a set of data and use LOD to make it available to users via the web. In rereading the flyer I see that this was not something the presenter wanted to cover. But I think it was apparent in the afternoon discussion that we wanted more information in the big picture … I feel like I have an understanding of what LOD is, but I have no idea how to use it in a meaningful way.”

Aside from the time constraints–which everyone understood–there’s a problem inherent in the fact that very few active LOD projects have moved beyond publishing their data (a good thing, no doubt about it) to using the data published by others. So it wasn’t so much that I didn’t ‘want’ to present more about the ‘bigger picture’; there wasn’t really anything to say, aside from the fact that the answer to that question is still unclear (and I probably wasn’t all that clear about it either). If I had a ‘road map’ to talk about and point them to, I certainly would have shared it, but sadly I have nothing to share at this stage.

But I continue to believe that just as progress in this realm is iterative, it is hugely important that we not wait for the final answers before we talk about the issues. Our learning needs to be iterative too, to move along the path from the abstract to the concrete along with the technical developments. So for MidWinter, we’ll need to be crystal clear about what we’re doing (and why), as well as why there are blank areas in the road-map.

Thanks again to the Hawaii participants, and especially Nancy Sack, for their efforts to make the workshop happen, and the questions and comments that will improve the Jane-athon in Chicago!

For additional information, including a link to register, look here. Although I haven’t seen the latest registration figures, we’re expecting to fill up, so don’t delay!

[these are the workshop slides]

[these are the general presentation slides]

by Diane Hillmann at December 19, 2014 03:22 PM

December 18, 2014

Mod Librarian

5 Things Thursday: SPARQL, RDF, Linked Data, DAM, Taxonomy

Here are 5 interesting things to hold you over until 2015. Mod Librarian will return on January 8th.

  1. Learn SPARQL with this great W3C tutorial. Then check out this SPARQL query editor on the Linked Data Hub.
  2. Full list of Metadata Matters webinars is here.
  3. Marvelous and free DAM webinars from Henry Stewart are here.
  4. SLA Webinar on Using Analytics to Improve Your Taxonomy and User Search Experience.


December 18, 2014 01:56 PM

December 17, 2014

Terry's Worklog

MarcEdit 6 Update Posted

A new update has been posted.  The changes are noted below:

  • Enhancement: Installation changes – for administrators, the program will now allow for quiet installations and an option to prevent users from enabling automated updates.  Some IT admins have been asking for this for a while.  The installation program will take a command-line option, autoupdate=no, to turn this off.  The way this is disabled (since MarcEdit manages individual profiles) is that a file is created in the program directory that, if present, will prevent automatic updates.  This file will be removed (or not recreated) if this command-line option isn’t set – so users doing automated installations will need to remember to always set this value if they wish to keep automatic updates disabled.  I’ve also added a note in the Preferences window indicating when the administrator has disabled the option.  (An example invocation appears after this list.)
  • Bug Fix: Swap Field Task List – one of the limiters wasn’t being passed (the process one field per swap limiter)
  • Bug Fix: Edit Field Task List – when editing a control field, the positions text box wasn’t being shown. 
  • Bug Fix: Edit Field Regular Expression options – when editing control fields, the edit field function evaluated the entire field data – not just the items to be edited.  So, if I wanted to use a regular expression to evaluate two specific values, I couldn’t.  This has been corrected.
  • Enhancement: Linked Data Linker – added support for FAST headings. 
  • Bug Fix: Linked Data Linker – when processing data against LC’s id.loc.gov, some of the fuzzy matching was causing values to be missed.  I’ve updated the normalization to correct this.
  • Enhancement: Edit Subfield Data – Moving Field data – an error could occur if the field receiving the moved data is a control field that is shorter than the position where the data should be placed.  An error check has been added to ensure this error doesn’t pop up.
  • Bug Fix: Auto Translation Plug-in – updated code because some data was being dropped on translation, meaning that it wouldn’t show up in the records.
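For example, assuming an MSI-based installer and the standard msiexec quiet switch (the installer file name here is an assumption), an unattended install that turns off automatic updates might look something like:

msiexec /i MarcEdit_Setup.msi /qn autoupdate=no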

Update can be found at: http://marcedit.reeset.net/downloads or via the automated updating tool.  The plug-in updates can be downloaded via the Plug-in Manager within the MarcEdit Application.

–tr

by reeset at December 17, 2014 06:25 AM

December 14, 2014

First Thus

December 12, 2014

OCLC Cataloging and Metadata News

November 2014 data update now available for the WorldCat knowledge base

The WorldCat knowledge base continues to grow with new providers and collections added monthly.  The details for the November updates are now available in the full release notes.

December 12, 2014 08:45 PM

December 11, 2014

First Thus

RDA-L Re: RE: Re. 2.12.9.2

Posting to RDA-L

On 12/12/2014 6:58 PM, J. McRee Elrod wrote:

For me, our decisions should be based on patron service, at least as much as rule exegesis. It is certainly in patrons’ best interest that all items in a series have the series title and numbers, even if not within one or more items in the series. Omitting this data due to a lacuna in RDA is simply not acceptable.

Patron service should be paramount, even if it requires anticipating a needed RDA or PS change. Rules should never be an excuse to omit helpful data.

I agree with this of course, but when adding series information (or any other information) from an obscure source, whether it is buried inside the item, found in a series listing in a completely different item, or somewhere on the web, please remember your hard-working cataloger colleagues–who are looking at your records more closely than anyone else–and put a note stating where you found that information.

They will thank you silently. Otherwise, they can waste inordinate amounts of time trying to find information that seems to have dropped in from nowhere, may wind up thinking they have something else, and may turn a simple record into a problem.


by James Weinheimer at December 11, 2014 09:52 PM

Mod Librarian

5 Things Thursday: Taxonomy, DAM and Archives

Here are five more things:

  1. Using DAM Content Migration to Maximize Asset Value
  2. The Accidental Taxonomist on Taxonomy Courses
  3. U.C. Berkeley set to pull plug on anarchist’s archive
  4. Eira Tansey’s notes on the 2014 Society of American Archivists meeting
  5. Taxonomy folksonomy cookbook presentation by Daniela Barbosa raises some interesting points


December 11, 2014 01:03 PM

December 09, 2014

Coyle's InFormation

Classes in RDF

RDF allows one to define class relationships for things and concepts. The RDFS 1.1 primer describes classes succinctly as:
Resources may be divided into groups called classes. The members of a class are known as instances of the class. Classes are themselves resources. They are often identified by IRIs and may be described using RDF properties. The rdf:type property may be used to state that a resource is an instance of a class.
This seems simple, but it is in fact one of the primary areas of confusion about RDF.

If you are not a programmer, you probably think of classes in terms of taxonomies -- genus, species, sub-species, etc. If you are a librarian you might think of classes in terms of classification, like Library of Congress or the Dewey Decimal System. In these, the class defines certain characteristics of the members of the class. Thus, with two classes, Pets and Veterinary science, you can have:
Pets
- dogs
- cats

Veterinary science
- dogs
- cats
In each of those, dogs and cats have different meaning because the class provides a context: either as pets, or information about them as treated in veterinary science.

For those familiar with XML, the nesting of data elements provides similar functionality. In XML you can create something like this:
<drink>
    <lemonade>
        <price>$2.50</price>
        <amount>20</amount>
    </lemonade>
    <pop>
        <price>$1.50</price>
        <amount>10</amount>
    </pop>
</drink>
and it is clear which price goes with which type of drink, and that the bits directly under the <drink> level are all drinks, because that's what <drink> tells you.

Now you have to forget all of this in order to understand RDF, because RDF classes do not work like this at all. In RDF, the "classness" is not expressed hierarchically, with a class defining the elements that are subordinate to it. Instead it works in the opposite way: the descriptive elements in RDF (called "properties") are the ones that define the class of the thing being described. Properties carry the class information through a characteristic called the "domain" of the property. The domain of the property is a class, and when you use that property to describe something, you are saying that the "something" is an instance of that class. It's like building the taxonomy from the bottom up.

This only makes sense through examples. Here are a few:
1. "has child" is of domain "Parent".

If I say "X - has child - 'Fred'" then I have also said that X is a Parent because every thing that has a child is a Parent.

2. "has Worktitle" is of domain "Work"

If I say "Y - has Worktitle - 'Der Zauberberg'" then I have also said that Y is a Work because every thing that has a Worktitle is a Work.

In essence, X or Y is an identifier for something that is of unknown characteristics until it is described. What you say about X or Y is what defines it, and the classes put it in context. This may seem odd, but if you think of it in terms of descriptive metadata, your metadata describes the "thing in hand"; the "thing in hand" doesn't describe your metadata. 

Like in real life, any "thing" can have more than one context and therefore more than one class. X, the Parent, can also be an Employee (in the context of her work), a Driver (to the Department of Motor Vehicles), a Patient (to her doctor's office). The same identified entity can be an instance of any number of classes.
"has child" has domain "Parent"
"has licence" has domain "Driver"
"has doctor" has domain "Patient"

X - has child - "Fred"  = X is a Parent 
X - has license - "234566"  = X is a Driver
X - has doctor - URI:765876 = X is a Patient
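Expressed in Turtle (the notation used later in this post), the vocabulary and instance data might be sketched like this; ex: is a hypothetical namespace, and the doctor URI stands in for the URI:765876 above:

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .

# Vocabulary: each property declares a class as its domain.
ex:hasChild   rdfs:domain ex:Parent .
ex:hasLicense rdfs:domain ex:Driver .
ex:hasDoctor  rdfs:domain ex:Patient .

# Instance data: using the properties makes ex:X an instance
# of Parent, Driver, and Patient, without saying so directly.
ex:X ex:hasChild   "Fred" ;
     ex:hasLicense "234566" ;
     ex:hasDoctor  <http://example.org/doctors/765876> .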
Classes are defined in your RDF vocabulary, as are the domains of properties. The above statements require an application to look at the definition of the property in the vocabulary to determine whether it has a domain, and then to treat the subject, X, as an instance of the class given as the domain of the property.

There is another way to provide the class as context in RDF: you can declare it explicitly in your instance data, rather than, or in addition to, having the class characteristics inherent in your descriptive properties when you create your metadata. The term used for this, based on the RDF standard, is "type," in that you are assigning a type to the "thing." For example, you could say:
X - is type - Parent
X - has child - "Fred"
This can be the same class as you would discern from the properties, or it could be an additional class. It is often used to simplify the programming needs of those working in RDF because it means the program does not have to query the vocabulary to determine the class of X. You see this, for example, in BIBFRAME data. The second line in this example gives two classes for this entity:
<http://bibframe.org/resources/FkP1398705387/8929207instance22>
a bf:Instance, bf:Monograph .

One thing that classes do not do, however, is prevent your "thing" from being assigned the "wrong class." You can, though, define your vocabulary to make "wrong classes" apparent. To do this you define certain classes as disjoint; for example, a class of "dead" would logically be disjoint from a class of "alive." Disjoint means that the same thing cannot be an instance of both classes, whether through the direct declaration of "type" or through the assignment of properties. Let's do an example:
"residence" has domain "Alive"
"cemetery plot location" has domain "Dead"
"Alive" is disjoint "Dead" (you can't be both alive and dead)

X - is type - "Alive"                                         (X is of class "Alive")
X - cemetery plot location - URI:9494747      (X is of class "Dead")
Nothing stops you from creating this contradiction, but some applications that try to use the data will be stumped because you've created something that, in RDF-speak, is logically inconsistent. What happens next is determined by how your application has been programmed to deal with such things. In some cases, the inconsistency will mean that you cannot fulfill the task the application was attempting. If you reach a decision point where "if Alive do A, if Dead do B" then your application may be stumped and unable to go on.
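A sketch of this vocabulary and instance data in Turtle might look like the following; note that plain RDFS has no way to declare disjointness, so this borrows owl:disjointWith from OWL, and ex: is again a hypothetical namespace:

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix ex:   <http://example.org/> .

# Vocabulary: two disjoint classes and the properties that imply them.
ex:Alive a rdfs:Class .
ex:Dead  a rdfs:Class ;
    owl:disjointWith ex:Alive .

ex:residence            rdfs:domain ex:Alive .
ex:cemeteryPlotLocation rdfs:domain ex:Dead .

# Instance data: X is typed Alive, but the property implies Dead --
# nothing stops you from writing this logically inconsistent data.
ex:X a ex:Alive ;
    ex:cemeteryPlotLocation <http://example.org/plots/9494747> .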

All of this is to be kept in mind for the next blog post, which talks about the effect of class definitions on bibliographic data in RDF.


Note: Multiple domains are treated in RDF as an AND (an intersection). Using a library-ish example, let's assume you want to define a note field that you can use for any of your bibliographic entities. For this example, we'll declare the entities Work, Person, and Manifestation as domains of ex:note. You define your note property something like:

ex:note
    a rdf:Property ;
    rdfs:label "Note"@en ;
    rdfs:domain ex:Work ;
    rdfs:domain ex:Person ;
    rdfs:domain ex:Manifestation ;
    rdfs:range rdfs:Literal .

Any subject on which you use ex:note would be inferred to be, at the same time, a Work and a Person and a Manifestation - which is manifestly illogical. There is no way in RDF to express the rule "this property CAN be used with these classes." For that, you will need something that does not yet exist in RDF but is being worked on in the W3C community: a set of rules that would allow you to validate property usage. You might also want to see what schema.org has done for domain and range.

by Karen Coyle (noreply@blogger.com) at December 09, 2014 06:31 AM

December 08, 2014

First Thus

ACAT Misspelling in contents Was: RE: Philosophical Tenants?

Posting to Autocat

Ian Fairclough wrote:

Philosophical Tenants and the 21st Century is the wording of the title of a chapter in the book Global business and corporate governance : ǂb environment, structure, and challenges / ǂc John Thanopoulos.
You may wonder whether this is an actual typo or the intended phrase! I leave it to the reader to judge. My suspicion is that “tenets” was intended.

After examining the book at https://www.safaribooksonline.com/library/view/global-business-and/9781606498644/, it seems to be a fairly serious work, so it is difficult to imagine that the wording is intentional.

Therefore, I figure this is why we get paid the big money! :-)

Would it help the public to add “Philosophical tenets”? Yes–without a doubt.

Although Cutter’s spelling of his “Alfabetic-order table” http://babel.hathitrust.org/cgi/pt?id=mdp.39015046432038;view=1up;seq=2 was certainly intended, because he was following the Spelling Reform movement (as Dewey and many others did), I would not hesitate today to make a “246 Alphabetic-order table” since I cannot imagine anybody ever searching for “alfabetic” today.

Catalogers should help people find information, not prevent it.

FacebookTwitterGoogle+PinterestShare

by James Weinheimer at December 08, 2014 09:26 PM

025.431: The Dewey blog

Up in the Air with 3D Printing

Stories about new 3D printing applications are becoming more and more commonplace.  But one recent article felt somehow different.  It  didn't address a different application of 3D printing so much as 3D printing in a different context—space:

The International Space Station’s 3-D printer has manufactured the first 3-D printed object in space, paving the way to future long-term space expeditions.

"This first print is the initial step toward providing an on-demand machine shop capability away from Earth," said Niki Werkheiser, project manager for the International Space Station 3-D Printer at NASA's Marshall Space Flight Center in Huntsville, Alabama. "The space station is the only laboratory where we can fully test this technology in space."

And what was that first 3D printed object?  A faceplate of the extruder's casing, that is, a part for itself.  What makes this (symbolically) a big deal is that the one 3D printer now in operation at the International Space Station, in addition to making replacement parts for itself, could ultimately also produce other 3D printers.  But that's not all.  As the CEO of one of the companies that collaborated on this project commented,

"The operation of the 3-D printer is a transformative moment in space development.  We’ve built a machine that will provide us with research data needed to develop future 3-D printers for the International Space Station and beyond, revolutionizing space manufacturing. This may change how we approach getting replacement tools and parts to the space station crew, allowing them to be less reliant on supply missions from Earth."

But for the time being, self-repairing 3D printers and self-duplicating 3D printers are just a "blue sky" idea, and supply missions to the International Space Station are still a necessity.

As noted in a previous posting, the 3D printing literature is developing along two directions, the technology itself and applications of the technology.  And so we ask, where would we class a work on 3D printing in space?  And where would we class a work on the 3D printing of 3D printers?

The number for 3D printing in space is quite straightforward:   621.9880919, built with 621.988 Additive manufacturing equipment (which has three-dimensional printers in its class-here note), plus T1—091 Areas, regions, places in general, plus 9 (from T2—19 Space, following the add instruction at T1—091).  In this example, the base number 621.988 is used to express the technology of 3D printing.

The number for the 3D printing of 3D printers is 621.988028, built with 621.988 Additive manufacturing equipment, plus T1—028 Auxiliary techniques and procedures; apparatus, equipment, materials.  In this example, the base number 621.988 is used to express the application of the technology, following the scatter class-elsewhere note at 621.988:   "Class additive manufacturing applications in a subject with the subject, plus notation T1—028 from Table 1."
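Laid out step by step, the two builds are:

621.988          Additive manufacturing equipment
+ T1—091         Areas, regions, places in general
+ 9              from T2—19 Space
= 621.9880919    3D printing in space

621.988          Additive manufacturing equipment
+ T1—028         Auxiliary techniques and procedures; apparatus, equipment, materials
= 621.988028     3D printing of 3D printers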

And what about the 3D printing of 3D printers in space?  Notation —028 is (much) higher than notation —091 in the preference table for Table 1. Standard subdivisions.  Thus, the number for the 3D printing of 3D printers in space is the same as for the 3D printing of 3D printers, i.e., 621.988028.

by Rebecca at December 08, 2014 09:11 PM