The future of bibliographic control: Data infrastructure

May 3, 2010

In January 2008, the Library of Congress Working Group (LCWG) on the Future of Bibliographic Control issued a report with findings on the state of bibliographic control, recommendations for the Library of Congress as well as libraries of all types across the country, and predictions of what might happen if recommended changes do not take place. [1] Recommendations ranged from increasing the efficiency of bibliographic record production, to positioning cataloging technology – and the cataloging community as a whole – for the future, to strengthening the library and information science profession. As someone who works with other library technologies but has no cataloging experience (other than MLIS coursework), my interest in this topic lies primarily in the future of cataloging technology. As I see it, the future of bibliographic control is tied to data infrastructure.

I don't think I did a satisfactory job of explaining my position in an earlier post; my criticism of RDA is not based on the vocabularies or the not-yet-released text but on the decision to retain the MARC21 standard while implementing RDA rules based on the FRBR model. I strongly believe that updating the metadata infrastructure would have benefits in several of the areas discussed in the LCWG report, including sharing bibliographic data, eliminating redundancies in cataloging, and strengthening the LIS community's place in the information arena. Even before the report's release two years ago, the cataloging community was issuing calls for a more extensible metadata infrastructure that would permit data sharing both within and outside libraryland. [2] An important outcome (perhaps the most important outcome?) of the LCWG report is the increased discussion of the metadata infrastructure issue among the cataloging community in the literature, the blogosphere, and email listservs. [3, 4]

Reducing redundancy and increasing efficiency by sharing metadata

The first set of recommendations in the LCWG report dealt with eliminating redundancies; this goal has not yet been accomplished, but the cataloging community's discussions about formatting records in RDA to facilitate sharing among entities within and outside libraryland are a start. Among the LCWG recommendations to increase use of bibliographic data available earlier in the supply chain were recommendation 1.1.1.2:

“All: Analyze cataloging standards and modify them as necessary to ensure their ability to support data sharing with publisher and vendor partners;”

and recommendation 1.1.1.5:

“All: Work with publishers and other resource providers to coordinate data sharing in a way that works well for all partners.”

Not to minimize concerns about loss of bibliographic control, but libraries might as well take advantage of the metadata created elsewhere by trusted partners; if partners are selected carefully, the benefits should outweigh the risks. Shared metadata could reduce the number of redundant records and, by distributing the responsibility (or burden, if you wish) of creating metadata among more players, each party might reclaim time, funding, or manpower to put toward other efforts. Of course, these arguments are essentially theoretical until they can be tested. Getting libraries to a point where we can try sharing more data with other entities and information sources will require a shift in attitudes and comfort zones as well as a change in the technology supporting our records.

Positioning our technology for the future

Section 3 of the LCWG report called for the greater cataloging community to “Position Our Technology for the Future.” [1] The first recommendation in this section was to “develop a more flexible, extensible metadata carrier,” including:

“3.1.1.1 LC: Recognizing that Z39.2/MARC are no longer fit for the purpose, work with the library and other interested communities to specify and implement a carrier for bibliographic information that is capable of representing the full range of data of interest to libraries, and of facilitating the exchange of such data both within the library community and with related communities.”

One potential replacement for the MARC standard is RDF/XML. The RDF (Resource Description Framework) data model and XML (eXtensible Markup Language) syntax were in existence well before the release of the LCWG report but are getting more attention from the cataloging community as discussion turns to data management and the Semantic Web. [5] Although there are other languages that might prove suitable, XML is well enough established to be in wide use, including by many of our potential partners in metadata sharing.
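To make this concrete, here is a minimal sketch of what a bibliographic description can look like in RDF/XML. The RDF and Dublin Core namespaces are real; the example.org identifiers are hypothetical placeholders, not part of any actual cataloging standard:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <!-- The resource is identified by a URI, not a free-text string -->
  <rdf:Description rdf:about="http://example.org/book/moby-dick-1851">
    <dc:title>Moby-Dick; or, The Whale</dc:title>
    <!-- The creator is also a URI, which other datasets can link to -->
    <dc:creator rdf:resource="http://example.org/person/herman-melville"/>
    <dc:date>1851</dc:date>
  </rdf:Description>
</rdf:RDF>
```

Because every element is a labeled container identified by a namespace, any XML-aware application – library or otherwise – can extract exactly the pieces it needs.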

XML uses tags for formatting ("markup"), as HTML does, but it is far more adaptable ("extensible") because users can define their own tags to serve as "containers" for different types of data. Those tags can then drive data manipulation and display, with unique identifiers used to pull the data from Web-accessible databases. Essentially, XML enables computers to read and process data, which is one of the main principles of the Semantic Web. MARC was designed to make metadata readable by machines, too (hence the name, MAchine-Readable Cataloging), but the problem is that no one outside of libraries, publishers, and distributors uses MARC21. XML, on the other hand, is not only machine-readable but also machine-actionable, and it isn't limited to libraries and related industries; it's used by players in all kinds of fields. What does this have to do with the future of bibliographic control? Packaging our metadata in an arrangement that is flexible, machine-accessible, and, perhaps more importantly, used by others outside of libraries but within the information arena would permit more give-and-take in record creation, hopefully resulting in less duplication of effort and more accurate records (as long as everyone uses the same tags, which was touched on by recommendations 3.1.1.2 and 3.1.3 in the LCWG report and is another discussion unto itself). By letting the machines do the heavy lifting, so to speak, we could then use the data more efficiently and with more confidence. This would benefit both the cataloging community and our users.
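As a rough illustration of "machine-actionable," here is a short Python sketch that pulls the identifier and title out of a record like the one above. Nothing library-specific is required, just a generic XML parser; the record reuses the hypothetical example.org URI from the earlier sketch:

```python
import xml.etree.ElementTree as ET

# Real, stable namespace URIs for RDF and Dublin Core.
NS = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "dc": "http://purl.org/dc/elements/1.1/",
}

record = """\
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/book/moby-dick-1851">
    <dc:title>Moby-Dick; or, The Whale</dc:title>
    <dc:date>1851</dc:date>
  </rdf:Description>
</rdf:RDF>"""

root = ET.fromstring(record)
for desc in root.findall("rdf:Description", NS):
    # rdf:about holds the unique identifier for the described resource.
    uri = desc.get("{http://www.w3.org/1999/02/22-rdf-syntax-ns#}about")
    title = desc.findtext("dc:title", namespaces=NS)
    print(uri, "->", title)
```

Any partner outside libraryland could run the same few lines, which is the interoperability argument in miniature.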

Go where the patrons are, or: How I learned to stop worrying and love Web 2.0

Library users are demonstrating different search strategies than in the past; users now often look for bibliographic information in sources like Amazon.com and Google instead of the OPAC. [6] Web-based tools like LibraryThing pull in bibliographic metadata, reviews, and references to the item found elsewhere online (such as on Wikipedia). Sources like Amazon.com and Google are often more intuitive than a typical OPAC, so it's not surprising that users gravitate to what they are comfortable with. Instead of watching from the sidelines, libraries should join in and take advantage of the metadata that's already available on the Web. The phrase "Go where your patrons/users/customers are" is often applied to libraries' use of Web-based technologies and social media, and it is applicable here too.

In addition to importing jacket cover images and professionally generated reviews from non-library sources, some library OPACs are also satisfying users' desire to contribute user-generated content like ratings, reviews, and comments. Despite the increase in user-generated content, and users' desire to create it, libraries want to maintain bibliographic control by not permitting users to edit catalog data. Although maintaining control in this manner is understandable, given that most users have no cataloging training, it seems that libraries could harvest some data from users – with limitations on what can be edited – with less effort than doing all original cataloging and without sacrificing the integrity of data created by trained catalogers. In other words, wouldn't some help be better than no help? I don't think the question can be answered adequately without giving it a shot. It is in the best interest of the LIS profession to implement and embrace the Web 2.0 features our patrons want; we can benefit from the give-and-take of metadata from patrons and other sources while keeping ourselves relevant as an online source of information.
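One way to picture "limitations on what can be edited" is a simple whitelist that accepts patron contributions only for designated fields and stores them apart from cataloger-created data. This is a hypothetical sketch, not any OPAC's actual design, and the field names are made up:

```python
# Hypothetical field set: patrons may contribute only to these fields.
USER_EDITABLE = {"tags", "ratings", "reviews"}

def apply_user_edit(record: dict, field: str, value) -> dict:
    """Accept a patron contribution only for whitelisted fields,
    keeping it separate from the library-created data."""
    if field not in USER_EDITABLE:
        raise PermissionError(f"'{field}' is maintained by catalogers only")
    record.setdefault("user_content", {}).setdefault(field, []).append(value)
    return record

record = {"title": "Moby-Dick", "subjects": ["Whaling"]}
apply_user_edit(record, "tags", "sea stories")   # accepted
# apply_user_edit(record, "title", "Moby Dick")  # would raise PermissionError
```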

Still waiting for the right outcome

In these areas, the outcome of the LCWG report has been more discussion than decision. In addition to a data container that will work with others inside and outside libraryland, a new data structure would, ideally, provide catalogers with linked access to standard vocabularies and accommodate newer forms of metadata like user-generated ratings, reviews, and tags. Developing standards is such an intricate and complex process, though, that it is better to take the time to examine the situation thoroughly and try to get it right the first time than to rush into a "solution" that does not facilitate desired functions and lacks long-term viability. That was part of the reasoning behind the LCWG's recommendation 3.2.5 to suspend work on RDA – "Assurance that RDA is based on practical realities as well as on theoretical constructs will improve support for the code in the bibliographic control community" (p. 30) – a recommendation which has not been adopted by the Joint Steering Committee for Development of RDA. The retention of MARC21 will have implications for libraries' ability to act on other LCWG recommendations, which might be realized sooner with the proper metadata infrastructure.

Notes

  1. Library of Congress Working Group. (2008). Report of the Library of Congress Working Group on the Future of Bibliographic Control. On the Record, January 2008. Retrieved from: http://www.loc.gov/bibliographic-future/news/lcwg-ontherecord-jan08-final.pdf.
  2. See, for example: Coyle, K. & Hillmann, D. (2007). Resource Description and Access (RDA): Cataloging rules for the 20th century. D-Lib Magazine, 13(1/2). Retrieved from: http://www.dlib.org/dlib/january07/coyle/01coyle.html.
  3. Coyle, K. (2010). RDA vocabularies for a twenty-first-century data environment. Library Technology Reports, 46(2). Retrieved from: http://alatechsource.metapress.com/content/k7r37m5m8252/ (links to each of the six chapters/articles available here).
  4. RDA-L (RDA email listserv). Retrieved from: http://www.mail-archive.com/rda-l@listserv.lac-bac.gc.ca/.
  5. See, for example: Coyle, K. (2010). Understanding the Semantic Web: Bibliographic Data and Metadata. Library Technology Reports, 46(1). Retrieved from: http://alatechsource.metapress.com/content/g212v1783607/ (links to each of the three chapters/articles available here).
  6. De Rosa, C. et al. (2005). Perceptions of Libraries and Information Resources. Dublin, OH: OCLC Online Computer Library Center. Retrieved from: http://www.oclc.org/reports/pdfs/Percept_all.pdf.

Improving Access to Rare, Unique, and Other Special Hidden Materials – It's happening…

May 3, 2010

Many great recommendations in "On the Record" produced a number of outcomes, some of which have already been touched upon in the previous posts. The outcome I want to focus on is the improvement in access to rare and unique materials in special collections and archives. I feel that some of these improvements have resulted from LC stating in this report that these materials need to be brought to light and made accessible. A number of great objectives were listed for improving access, but I will focus on only a few.

2.1.2 Streamline Cataloging for Rare, Unique, and other Special Hidden Materials, Emphasizing Greater Coverage and Broader Access.

In the Library of Congress response to "On the Record," a number of planned actions are listed for this particular objective, such as developing and sharing cataloging workflows that will allow the objective to be carried out in a practical manner. Most importantly, it mentions the need to develop technology that will automate metadata production.

In what I feel is a response to these planned actions, a number of presentations at workshops and conferences have focused particularly on the cataloging workflow for special collections and rare materials. One example is the Hidden Collections Symposium <http://www.clir.org/hiddencollections/symposium20100329.html> held in March 2010, where several of the presentations focused on cataloging workflows for special collections. The presentation on the "African Set Maps" project at the Library of Congress <http://www.clir.org/hiddencollections/symposium/LibraryofCongress.ppt> really highlights this push for greater coverage and broader access. The presentation makes clear that they are implementing linked data and attempting to share these materials via the web. They have links that go from the historical maps to a Google Earth view that highlights the area as it currently exists. One slide in particular gives a great chart of the interaction between the MARC records, the data entry form, the LC Web Portal, and Google Earth. My biggest takeaway from this relates to something Giwilker said in her post RDA's "Legacy Approach": "The trouble is that new solutions will have to be built on the old infrastructure. MARC can, and should be, expanded to deal with additional types of information, without reducing its present usefulness." The map collection project underway at the Library of Congress shows that this is just what can be done by integrating the old infrastructure with new technologies.
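As a small illustration of how a legacy MARC record can carry such a link: the standard 856 field (Electronic Location and Access) stores a URL in subfield $u, with display text in $y. The URL below is hypothetical, not taken from the actual LC project:

```
856 42 $u http://example.loc.gov/maps/africa/sheet42.kml
       $y View this map region in Google Earth
```

Nothing about the record format has to change for the link to work; the new functionality lives at the other end of the URL.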

2.1.4 Encourage Digitization to Allow Broader Access.

I have been applying for jobs lately, and I have noticed a significant increase in the number of positions for digital librarians – far more than I had seen when I started looking for jobs in 2008. I recently applied for one at the University of San Francisco for which the main duty would be to assess which areas of the library's holdings should be made available digitally. While I have no direct sources to back up the claim that this results from "On the Record," I think it's not a terrible stretch to think that many departments may have been inspired by it.

2.1.4.1 LC: Study possibilities for computational access to digital content. Use this information in developing new rules and best practices.

At PLA's annual conference I sat through a demonstration of LibLime's product ArchivalWare. I think this is a good example of products being developed that make use of computational access to digital content. This product allowed users to search digital content via traditional Boolean searches, pattern searches, and concept searches. Pattern searches allow users to misspell a term and still retrieve the information they were looking for; similarly, if they type words out of order, the pattern search will retrieve them in the order they were intended. Add controlled vocabularies and this becomes an expanded and, in my opinion, exciting way to search for material in special collections. Concept searches were its coolest feature: controlled vocabularies are added to the bibliographic records to expand terms found in the collection. In the demonstration they showed how a user could search for "contaminated water" and retrieve material that contained the text "polluted water," because it was a related term in the controlled vocabulary. As with the map project being done at the Library of Congress, all of this is being done with traditional MARC records in Koha.
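The two search behaviors described here are easy to sketch. The following Python fragment is only an illustration of the idea – fuzzy matching for misspellings and vocabulary-based query expansion – not ArchivalWare's actual implementation, and the vocabulary and documents are invented:

```python
import difflib

# Invented fragment of a controlled vocabulary: term -> related terms.
VOCABULARY = {"contaminated water": ["polluted water", "water pollution"]}

DOCUMENTS = {
    1: "report on polluted water in the county reservoir",
    2: "minutes of the library board meeting",
}

def pattern_search(query: str) -> list[str]:
    """Tolerate misspellings by fuzzy-matching against known terms."""
    return difflib.get_close_matches(query, list(VOCABULARY), cutoff=0.8)

def concept_search(query: str) -> list[int]:
    """Expand the query with related vocabulary terms, then scan documents."""
    terms = [query] + VOCABULARY.get(query, [])
    return [doc_id for doc_id, text in DOCUMENTS.items()
            if any(term in text for term in terms)]

print(pattern_search("contamnated water"))   # -> ['contaminated water']
print(concept_search("contaminated water"))  # -> [1], via "polluted water"
```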

All of these projects I see happening make me think that retraining cataloging staff in RDA would consume time better spent training them in newer technologies that can be incorporated into the older infrastructure. I was once told that in order to become a good librarian, I would need to learn programming. I took an introductory programming class at the University of Washington and, to my dismay, found that I was no good at it. I thought about my future and my goal of becoming a cataloging librarian: would I make a terrible cataloger if I couldn't program? Will I be able to improve access to rare and unique materials if I can't understand the new technology being developed? I have no answers to my own internal questions. However, I am pleased to see others coming up with such fantastic solutions to access issues for special materials.

http://www.loc.gov/bibliographic-future/news/lcwg-ontherecord-jan08-final.pdf

http://www.loc.gov/bibliographic-future/news/LCWGResponse-Marcum-Final-061008.pdf

http://www.clir.org/hiddencollections/symposium20100329.html

http://www.clir.org/hiddencollections/symposium/LibraryofCongress.ppt

Questions about the Chaos

May 3, 2010

Diane encouraged us to post questions here for Jon about his lecture.

I'm curious about the distinction between the Semantic Web and the Linked Open Data Web. I was just getting my mind around having two webs, not three! I know that linked data aren't necessarily open and open data aren't necessarily linked… so why doesn't it matter if thinking machines can't figure out the linking on this third web?

Also, I was wondering about the appearance of a person’s name as a Spanish label for Daytona Beach. This mistake got fixed when DBpedia was refreshed. If we were dealing with an error like this with linked library data, would a human being who noticed it be able to fix it?

Thanks!

User-generated cataloguing and its expanding role

May 1, 2010

Although it provides a set of guidelines for the library world as a whole, "On the Record", the report of the Library of Congress Working Group on the Future of Bibliographic Control, was primarily directed at the Library of Congress itself. By June of the same year, the Library of Congress had published a report responding to the recommendations of "On the Record". That report is available online at http://www.loc.gov/bibliographic-future/news/LCWGResponse-Marcum-Final-061008.pdf; it makes the point early on that it "is not an official program statement from the Library of Congress, nor is it an implementation plan". However, the report is generally enthusiastic about the vision outlined in "On the Record".

Among the recommendations of "On the Record" were three suggestions in section 4.1, "Design for Today's and Tomorrow's User", regarding greater integration of library bibliographic data with external sources. The changes discussed are not those related to the possible use of FRBR to make library bibliographic data part of the Semantic Web; here "On the Record" deals with user-created content, such as 'tags' and user-supplied reviews. The Library of Congress responded to these suggestions by naming a number of small projects focused on similar goals, and resolving to support their work and seek out more projects. Among the projects named were:

  • The Library of Congress’s own Bibliographic Enrichment Advisory Team (BEAT), which attempts to add data such as tables of contents and reviews to bibliographic records.
  • WPopac Project at Plymouth State University, which supports user-generated tags for records
  • PennTags at the University of Pennsylvania Libraries, similar to WPopac
  • The Library of Congress’s Prints and Photograph Division Flickr project, which allows users to tag photos, with significant guidelines regarding how to tag

These projects already existed before "On the Record" appeared, and the Library of Congress did not propose any new projects in response to the report.

The fate of these projects has been mixed:

  • The BEAT project's website has not been updated since 2008, and the project itself is not mentioned in any articles I have been able to locate since that date; as the website describes it as an all-volunteer project, it's possible that the volunteers involved have moved on without finding anyone to take up the slack.
  • WPopac has been renamed Scriblio, and is still being worked on; the software is in use by several libraries. It is not clear if Scriblio still supports the addition of user-generated content.
  • PennTags still exists, and the University of Pennsylvania’s Franklin Library online catalog still includes a link to “Add to PennTags” at the bottom of each record. However, I was unable to locate any records which included PennTags content. Either the system has not been utilized enough to make tags available for all or most records, or the tags are only visible to those logged into the website as members of the UPenn community; while only allowing community members to contribute tags makes sense, it seems odd that only community members would be allowed to use them for browsing.
  • However, the Library of Congress’s Prints and Photographs Flickr project has been healthy and successful. In October of 2008, a report was released discussing the progress of the project. At that point, of the close to 5000 images the LOC had uploaded to Flickr, almost all had been tagged, and more than 65,000 tags had been added in total to the collection. Users had also added helpful comments to many photos, including information such as detailed description of locations and events, and links to related photos.

The Prints and Photographs project had one additional outcome that demonstrates the power of user-supplied content: it inspired the creation of the Flickr Commons, a more broad-reaching project to allow Internet users at large to assist in describing and tagging image collections of historical or cultural importance. This project currently includes collections from over 40 institutions, and is an apt demonstration of the power of user-supplied content.

The success of the Flickr Commons compared to other projects involving user-generated content suggests that the guidance called for in recommendation 4.1.2.3 is a vital component in developing useful user-generated content. Other factors that might have contributed to its success were a connection with an already thriving collaborative community, as Flickr has a large user base and publicized the project heavily, and ease of contribution, as photographs are relatively uncontroversial and easy to identify. Collaborating with existing sites might be a worthwhile strategy for libraries attempting to add user-generated content.

The "Response" pointed out that "the relationship of entry vocabulary to controlled terms is a challenge for all catalogs", and that accordingly, much guidance will need to be provided to allow users to add useful and meaningful tags. It mentioned the extensive guidance provided in the Flickr project as a reason for its success. This is an important point that will need to be remembered. The response also mentions the ongoing debate about pre-coordination and post-coordination of Library of Congress subject headings; the implications of user-generated content as related to subject headings are too numerous to discuss in detail here.

Although the "Response" shows good intentions, it seems that little has in fact been accomplished in response to its recommendations regarding community interaction. The Flickr Commons project arose independently of "On the Record", although the Commons is aligned with the community-related goals stated in it. However, its success demonstrates that user-generated content can be used effectively to enhance a catalogue and that, indeed, given a sufficiently large and motivated community, it can become a catalogue in itself.

Why have more libraries not made the leap to include user-generated content? A possible reason is identified in the "Response" itself, although not as an answer to that question: "preserving the library-created data is essential to both access and reuse in the future". Libraries may be concerned that user-generated content would not be sufficiently differentiated from content originating with catalogers, and that the content might not be of the best possible quality. This is a legitimate concern. However, accepting user-generated data "without interfering with the integrity of library-created data" (as suggestion 4.1.2.1 puts it) is a technical problem, not a systemic one. Appropriate OPAC interface design should allow segregation by content source, and perhaps even allow users to hide content from external sources if they do not think it helps them locate information.
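A sketch of that interface-design point: if every piece of external content is keyed by its source, the catalog can render or suppress each source independently while the library-created data stays untouched. The structure below is illustrative only, and the source names are made up:

```python
from dataclasses import dataclass, field

@dataclass
class CatalogRecord:
    # Cataloger-created data lives in its own structure and is never merged
    # with external content, preserving its integrity for future reuse.
    library_data: dict
    # External content keyed by source, so each source can be shown or hidden.
    external: dict = field(default_factory=dict)

    def display(self, hidden_sources: frozenset = frozenset()) -> dict:
        view = dict(self.library_data)
        for source, content in self.external.items():
            if source not in hidden_sources:
                view[source] = content
        return view

rec = CatalogRecord(library_data={"title": "Moby-Dick", "subjects": ["Whaling"]})
rec.external["patron_tags"] = ["classics", "sea stories"]
rec.external["vendor_reviews"] = ["4.5 stars"]
print(rec.display(hidden_sources=frozenset({"vendor_reviews"})))
```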

Included as well under the general heading of "positioning our community for the future" were the broad suggestion of additional testing of FRBR (section 4.2) and a number of specific proposals related to reshaping the Library of Congress Subject Headings (under section 4.3). Proposal 4.3.1.2 was to "Make LCSH openly available for use by library and non-library stakeholders"; 4.3.1.1 was to "Transform LCSH into a tool that provides a more flexible means to create and modify subject authority data." Again, an existing project was singled out to implement the suggestion: in this case SACO, the Subject Authority Cooperative Program, which provides a means for libraries to submit proposed changes to the LCSH quickly and easily. This project is still ongoing and is working very well; more than 50,000 proposals had been submitted as of January 2010. Currently, only libraries can participate; they are required to have access to LOC's online authority files. Making LCSH more openly available might facilitate the goal of improving it, as it would allow more public review and suggestions for improvement.

4.3.3.1 suggested creating more linkages between LCSH and other subject authority files; the Response said this was desirable but technically unfeasible. If the creation of such subject linkages were done with the assistance of interested members of the public, the unmanageably large task might become possible to complete.

The possibilities of user-generated content for enhancing catalogues are well known, and “On the Record” acknowledged and encouraged them. Although the “Response” agreed that the proposals were good ideas, so far little has been done to implement them. In that sense, the report cannot be said to have had results at all. Nonetheless, that it brought these ideas into the public eye and that LOC agreed in principle with their ideas suggests that user-generated data has not been banished from the world of cataloguing – only put aside in the face of ongoing struggles to keep up with technology, and the ongoing RDA debate.

LC and the Semantic Web

May 1, 2010

Early in 2008, the Library of Congress Working Group on the Future of Bibliographic Control published the report On the Record. This Report, directed at both the Library of Congress and the American bibliographic community at large, was to serve as a "'call to action' that informs and broadens participation in discussion and debate, conveys a sense of urgency, stimulates collaboration, and catalyzes thoughtful and deliberate action." (p. 3) The Working Group recognized that the "future of bibliographic control will be collaborative, decentralized, international in scope, and Web-based." (p. 4) Pursuant to this assertion, the Working Group made a number of recommendations that pushed libraries ever closer to participation in the Semantic Web. The Semantic Web, with its inherent organization and utilization of linked data, is an ideal venue for libraries and, more importantly, library data. By putting library data into the structure of the web itself, thus allowing this data to be used by a wide variety of information communities, libraries can gain a greater sense of relevance and importance in this increasingly digital world.

It is this driving of library practice and standards toward the Semantic Web, and the clear articulation of both the benefits of the change and the inherent issues in the status quo, that is perhaps one of the most important outcomes of this Report since its publication. Yet this drive has not been without difficulty or setbacks. While the Report calls for sweeping change and increased collaboration and communication, the actions of the Library of Congress and the reluctance of the library community do not necessarily echo this charge. As discussed in the Report and on the message boards for this course, the fate of libraries as significant information providers hangs on their ability to follow their users into the web. The third recommendation of the Report exhorts libraries to position themselves and their technology for the future "by recognizing that the World Wide Web is both our technology platform and the appropriate platform for the delivery of our standards." (p. 5) Additionally, the library community must recognize that "machine applications" are also potential users of "the data we produce in the name of bibliographic control". (p. 5) Currently, libraries, via their catalogs, are on the web. However, the data that libraries produce is locked within their databases. This data is not in the web, in that it cannot be utilized, shared, mashed up, or effectively linked to (or, at least, not with any real ease). By remaining on the outskirts of what has become a flourishing information and communication platform, libraries do themselves – and their patrons – a great disservice.

The Working Group focuses the efforts of libraries and LC on entering the Semantic Web by advocating for a change in the current standards libraries use to maintain and share their data. One standard in particular is MARC. I have already written about what I perceive to be the limitations of MARC in the Semantic Web. In 3.1.1, the Working Group recognizes that "Z39.2/MARC are no longer fit" standards for metadata and calls on the Library of Congress to "work with the library and other interested communities" to create a metadata carrier that will be amenable to libraries and that will allow libraries to exchange data with other information communities. (p. 25) By moving away from the MARC "stack" and by actively collaborating with other information communities, libraries will be well placed to interact within a web environment.

This is a fairly bold statement, particularly coming from an institution such as the Library of Congress, which currently holds the responsibility for maintaining MARC21 (p. 7). While this is something that other librarians and information professionals had been discussing, coming from the Library of Congress it carries a certain amount of weight. Even if LC does not have the mandate (p. 6), and matching funding, to be the "National" library, it has undertaken this role and it certainly leads by example. In this Report, LC demonstrates its open-mindedness and practicality by looking the future square in the eye.

However, I am unconvinced that LC has made much headway in this area – a move that I recognize is easier said than done. While bibliographic utilities such as OCLC [cite] can convert library data into other, more interoperable standards, the library community as a whole is still MARC-based. On April 21st, LC released more information on its testing of RDA. This testing is still inherently MARC-based, with additional fields added to bibliographic records to indicate manifestations, while works and expressions will be created as MARC authority records. I am saddened that LC is not taking the opportunity, with the emergence of the new cataloging code, to embrace a new carrier, instead of adding more complexity to the already dated MARC standard. This also, as indicated in class discussion by Elizabeth Mitchell, directly contradicts 3.1.3 of the Report, which calls for the entire library community to "include standard identifiers for individual data elements in bibliographic records." MARC currently favors textual strings, not the URIs from which the Semantic Web draws its power. While I acknowledge that the blow of a new standard might be mitigated by encoding it in the familiar way, this could be a setback for libraries in their effort to enter the web.
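The string-versus-URI contrast is easy to see side by side. The first line is a conventional MARC name heading, a text string that must be matched character for character; the second restates the same fact as an RDF-style triple in which both the book and the author are URIs that machines can follow and link to. The example.org URIs are hypothetical (the Dublin Core creator property is real):

```
100 1# $a Melville, Herman, $d 1819-1891.

<http://example.org/book/moby-dick-1851>
    <http://purl.org/dc/terms/creator>
    <http://example.org/person/herman-melville> .
```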

The Library of Congress has been much more successful in preparing and releasing its vocabularies in forms more conducive to sharing via the web. This is a very important and very impressive move, as the controlled vocabularies utilized by libraries are what have led, in part, to the richness and coherence of our bibliographic descriptions. The Web-friendly version of LCSH is located here: http://id.loc.gov/authorities/. In the "About" section, LC acknowledges the "Linked Data" community and provides a list of other vocabularies and codes that will soon receive the same treatment. Benefits, for both users and machines, are outlined. Users can download entire vocabularies in RDF/XML or N-Triples. Here, LC follows its own suggestions and embraces the power of the URI. Thus "Fencing coaches" can be found at http://id.loc.gov/authorities/sh93010603, a location based on an alphanumeric string instead of the usual textual string matching. Other communities on the web can now use this concept via this link or its RDF/XML form, forge links between this particular URI and similar or related concepts, and generally enhance what is already a pretty powerful tool. This tool also raises the profile of libraries by not only bringing the data out into the web, but also demonstrating that libraries are now willing and interested in sharing and playing with the rest of the information community.
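For a sense of what those downloads contain, here is roughly how the "Fencing coaches" concept can be expressed in N-Triples (one triple per line) using the SKOS vocabulary. This is a simplified sketch; the actual id.loc.gov data includes many more statements:

```
<http://id.loc.gov/authorities/sh93010603> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2004/02/skos/core#Concept> .
<http://id.loc.gov/authorities/sh93010603> <http://www.w3.org/2004/02/skos/core#prefLabel> "Fencing coaches"@en .
```

Any application on the web can fetch statements like these and link its own data to the same URI, which is exactly the give-and-take described above.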

Yet this move was not without difficulty and controversy. LC launched this particular system only after it asked LC employee Ed Summers to shut down his own SKOS-generated version of LCSH, formerly located at http://lcsh.info. (More information on the creation of this vocabulary can be found here: http://arxiv.org/abs/0805.2855.) Though LC's version hit the web less than a year after Summers' site came down, by terminating this innovative service – a service that was already in use by others in the metadata and library communities – the Library of Congress looked somewhat reactionary, if not backwards (http://lcsh.info/comments1.html). I do understand that LC might want more centralized control over bibliographic tools that it developed over years, but I am not fully convinced that this aggregated information, added to for years by librarians around the country, is solely within its domain. Legal considerations aside, LC's action, as a commenter on Summers' closing post indicated, seemed to fly in the face of the Working Group's recommendation that LC consider its strengths and priorities and allow others in the community to pick up the slack and innovate for it. Its actions might also discourage innovators who might otherwise have taken the time to prepare other LC tools, or tools involving LC data, for the Semantic Web.

Clearly, LC is trying to continue its work in bringing libraries into the Semantic Web, and I applaud this commitment. In furtherance of this goal, I would like to see LC move toward adopting or integrating the RDA vocabularies to augment or supplement its existing vocabularies and resources. Initially, in the Report, the Working Group called for the cessation of RDA development (3.2.5). This was due to the unsatisfactory business case, a lack of confidence in the benefits of the new code, and a sense that FRBR was perhaps too untried for straight implementation (p. 25). Later, LC moved toward recommending a period of testing instead (see LC's comment on the Report here). In the Report, the Working Group also called for RDA/DCMI collaboration and development of a "Bibliographic Description Vocabulary" (p. 25). As seen in LC's response to the Report, it is still committed to supporting this work and to developing other vocabularies along the same lines (LC response, p. 41). By helping incorporate the RDA Vocabularies into RDA testing and implementation, LC could truly start moving toward cataloging in a web environment. This step could only be improved by solid efforts toward finding a replacement carrier for data, ideally one that will interoperate with MARC. While this will undoubtedly be easier said than done, it is necessary to the future not only of bibliographic control, but of libraries themselves.

Responses to “On the Record” – Bringing Awareness

April 30, 2010

On the Record (OTR) provided an examination of library services today and a vision for the future, with a focus on the Library of Congress (LC) but with implications for all libraries nationwide. The Working Group investigated current bibliographic control practices and formulated several goals and steps that should be taken to make libraries relevant for the 21st century. I want to share my thoughts on one result of this report: awareness of the implications of remaining with the current standards and technology, and a push to move forward (with a sprinkling of action, too).

Background

On the Record elicited numerous in-depth responses crafted to address recommendations in the document. The report addressed the areas of:

  1. Bibliographic production and maintenance
  2. Rare, unique, and other special materials
  3. Positioning technology for the future
  4. Positioning the library and information community for the future, and
  5. Strengthening the Library and Information Science profession.

These five areas were evaluated, the consequences of maintaining the status quo explained, and recommendations issued. The Working Group (WG), made up of individuals from a variety of library and information science arenas, defined three broad principles to frame the report: bibliographic control, the bibliographic universe, and the role of the Library of Congress. The ultimate vision outlined by the WG was that of "bibliographic control that will be collaborative, decentralized, international in scope, and Web-based." As a result of this broad investigation, any public or academic library that looked to the Library of Congress for leadership either 1) became aware of some of the challenges facing librarianship today, or 2) received acknowledgement of the challenges it had already recognized in its own experience. Generalized action steps were enumerated, but the document did not provide significant direction in regard to practical implementation.

Waking the Sleeping Giant?

It is almost a recognized fact (I say almost…) that AACR2 and MARC cannot fulfill the needs of libraries in the 21st century – assuming, that is, that one of the major needs of the library is to be of useful, timely, and relevant service to the user. The web environment, the proliferation of search engines like Google, and private enterprises such as Amazon are the first stop for most of the connected world looking for information. Evaluative rankings, reviews, book cover pictures, and sample chapters provide reader's advisory service from the comfort of the living room easy chair. However, the WG's recommendation to halt work on RDA threatened to make libraries even less connected and relevant to the public than they are now. Certainly, people can look up the catalog online and even access WorldCat, but the entry points are limited and evaluative information non-existent. The LCWG section on RDA stated:

3.2.1.1 JSC: Suspend further new work on RDA until:

  1. more, large-scale testing of FRBR has been carried out against real cataloging data, and the results of those tests have been analyzed (see 4.2.1 below);
  2. the use and business cases for moving to RDA have been satisfactorily articulated; and
  3. the presumed benefits of RDA have been convincingly demonstrated.

This recommendation was addressed in almost every response to the document I read, even when a response was not directly called for. Seemingly, the section on canceling RDA development may have been designed to push the Library of Congress into reevaluating its timeline and strategy for RDA testing and implementation.

The Library of Congress, as the national leader (although it is not an official national library), is in a position to inform and propel RDA's development and testing. And indeed, the report elicited a very prompt response from the Library of Congress, which outlined four processes that would take place to evaluate RDA. It also brought LC together with the National Agricultural Library (NAL) and the National Library of Medicine (NLM) for the purpose of executing these processes. Together they were to:

  1. Jointly develop milestones for evaluating how we will implement RDA
  2. Conduct tests of RDA that determine if each milestone has been reached; paying particular attention to the benefits and costs of implementation
  3. Widely distribute analyses of benefits and costs for review by the U.S. library community
  4. Consult with the vendor and bibliographic utility communities to address their concerns about RDA.

The final statements addressed the implementation timeline and the commitment to continue the work alongside their international colleagues.

As I see it, a significant result of OTR was that it pushed the Library of Congress to commit to testing RDA and to put its support behind that testing. There is nothing like a threat to suspend heavily invested work to re-energize an organization and propel it toward more defined and decisive action.

Other organizations get their say

Along with the Library of Congress, other organizations also issued formal responses to On the Record, ranging from "yes, suspend it please" to "no, this just maintains the status quo." These responses were, in my mind, an extremely significant outcome of the report because they brought individual stances together into cohesive organizational statements with direction, action steps, and criticism. They gave a voice to segments of the library profession in a formalized manner, letting members provide input and help shape their organizations' policies concerning RDA. Even more importantly, the Joint Steering Committee (JSC) and LC would have succinct, collective reports to inform their decision-making.

The Association for Library Collections and Technical Services (ALCTS) Task Group formulated a 16-page Recommendation for Action response which addressed all five sections of the report. Notably, section three, which dealt with technology and RDA, drew the longest response. Their stance was that RDA should not be suspended and that the LCWG recommendation would "maintain the status quo." Although not explicitly in favor of RDA, they asked what the benefits of maintaining this status quo would be versus going forward with RDA. They also questioned the idea of an alternative strategy: who would provide the leadership, and where would the collaboration come from? Beyond these concerns, the ALCTS Education Committee voiced interest in collaborating on assessing usability and training needs for the purpose of creating RDA training materials, as referred to in the footnote to 3.2.5.2. Moreover, the ALCTS Board said that it could be in a place to provide evaluative assistance for determining what a "business case" for RDA might look like, as well as to address the question of trust in the Library of Congress as a whole. The ALCTS report is a prime example of a collective response meant to 1) address the original report and 2) motivate the association's membership.

On the other side of the RDA issue was the American Association of Law Libraries (AALL). The introduction to their response, however, did articulate that they had some "broad comments" and "specific concerns regarding a number of the recommendations." Interestingly, this association wanted Web technology to work around MARC and fully supported the suspension of RDA. More important, though, is that On the Record got groups of people talking and prompted formal stances on the issues it brought forth. In fact, ALCTS declared 2010 the Year of Cataloging Research in response to sections of On the Record. (http://faculty.washington.edu/acarlyle/yocr/index.html)

Propelling Discussion towards the Web

The section that discussed RDA was titled "Positioning our Technology for the Future" and included comments on data carriers, the influence of the Web, and the use of standard identifiers. I wanted to throw this in to evaluate the influence of On the Record on OCLC as well as on the idea of the Semantic Web. RDA is not just the proposed replacement for AACR2; it is supposed to help position the cataloging community to enter the online universe, potentially through the Semantic Web. OCLC's response to the report was both encouraging and a bit disheartening, indicating that some areas mentioned in the report, such as Web access, will not significantly change their current plans. Interestingly, the only section OCLC addressed point-by-point was that on technology; there were some blanket statements, but RDA was not mentioned once. Still, the response did provide evidence that OCLC knew what was coming down the pipeline and that the LCWG was warning them to get ready for it. For a comical but serious rebuttal to OCLC's response, see Rob Styles' post here. As Styles puts it, "The working group's draft presents the library world with a rallying point around which it can choose to really move forwards into the internet age…" The report made its readers understand that the Library of Congress really saw the Web and public access to information as the next steps in bibliographic control – something that could reach the masses who are connected to the internet.

Conclusion

On the Record provided a good foundation for moving forward with bibliographic control. It assessed not only where we are now, but also where we as library and information professionals want to be in the future. The Library of Congress is a national leader when it comes to future directions and new initiatives. This report highlighted the challenges and opportunities coming quickly to the library community. Even though it was quite vague in its recommendations and provided no implementation steps, it helped to create awareness of the issues within the various library associations. Even if these concerns had been raised before, it was a significant document that prompted thoughtful response from the community.

References

ALCTS Report

American Association of Law Libraries

ALCTS on behalf of the ALA

Diane Hillmann. Getting There

Diane Hillmann’s response

OCLC Response

Technical Services Special Interest Section

Thomas Mann for the Library of Congress Professional Guild.

On the Record: Constructive Controversy

April 30, 2010

This week, as part of our Introduction to RDA, we’ve been thinking about the Library of Congress Working Group’s (LCWG) report on the present and future of bibliographic control, On the Record, issued in January 2008. Diane asked us each to write a blog post on one important outcome of this document, now two years old.

The outcome I’ve identified is constructive controversy. Controversy on its own is not really an achievement unless it leads to useful changes. On the Record not only caused a great storm of discussion, it got lots of different people doing something as a result: the debate led to useful research, just as the report itself recommended (37-38). What I’d like to do here is to discuss the U.S. cataloging environment before and after On the Record was released and then point out some of the interesting projects that have resulted.

Controversy already existed before the LC convened its working group to consider the future of bibliographic control. In April of 2006, the LC shocked the U.S. cataloging community by abruptly announcing that it would stop creating or maintaining series authority records (Chambers and Myall, 91). This decision left a lot of libraries in the lurch, and amid the uproar, LC postponed the plan and organized the working group to consider 21st-century bibliographic control. This working group included people from both inside and outside the library profession: Google and Microsoft as well as academic and public librarians from across the country. The LC has been criticized before for including testimony from people who didn't really "get" how the LC works (Mann 2006, 17). But everyone seems to agree that libraries are no longer the center of activities in the information universe. If libraries are to reverse this increasing marginalization, who better to help than big players like Microsoft and Google? We don't want to be Google, but we can surely learn from them.

After a year of public testimony and research, the LCWG compiled their report, and of their 100+ recommendations, some should not have been surprising. Standardization, internationalization, and cooperation to reduce redundant work have a long history as goals of cataloging (Denton, 36). The trouble is that the World Wide Web has changed everything about how we discover and process information, and making the Web the library platform (LCWG, 7) entails a lot of significant changes.  The argument for decentralized, dynamic, and flexible cataloging (LCWG, 1) was harder to swallow for “traditional” catalogers and librarians, not to mention decoupling the Library of Congress Subject Headings (LCSH), getting rid of MARC, and suspending work on RDA (LCWG 35; 25; 29).

The report’s depiction of libraries as businesses also raised objections (Mann 2008, 7-8). It would be nice to think that some institutions are immune to business considerations, but in this day and age, there isn’t a library or a school or a church or any nonprofit that doesn’t have to think about the bottom line and whether people might go elsewhere for its services. To paraphrase what the instructor in my Reference course told us the other day, if we think libraries aren’t in competition with Amazon and Google, we’re dangerously naïve.

I’ve criticized On the Record this week for waffling about the LC’s role. The report says that the LC has historically been the de facto national library, but lacking an actual mandate and accompanying funding, it can’t afford to keep acting like a national library…but on the other hand, the LC takes its recognized leadership role seriously, and it should keep working to make things happen for different constituencies (6-7). Thinking about the outcomes of On the Record, though, has led me to see this report as an important act of leadership on the part of the LC. By convening experts from the information world (not just the library world) and producing this provocative report, the LC helped to focus attention in a vital way that got many people interested in doing something. And although the LCWG and the LC are not one and the same, the LC’s formal response is largely supportive of On the Record’s recommendations (Marcum).

Even one of the most vociferous critics of the report echoed its recommendation to do more research. Thomas Mann, after arguing convincingly (I'm always swayed by his arguments) that traditional bibliographic control like the LCSH is essential, especially to scholars, concludes "'On the Record' But Off the Track" with recommendations to pursue more prototypes – research! – for sharing data, such as the LC's Flickr and Digital Table of Contents projects (36-37).

In their extensive literature review of cataloging scholarship during 2007 and 2008, Chambers and Myall note that "the future of cataloging and bibliographic control was the explicit focus of many contributions" during the time the LCWG was compiling information and just after On the Record was released (93). Their article provides fascinating background to On the Record, showing how many researchers were struggling to find a compromise that allowed cataloging to embrace the brave new world of the Web while also preserving valuable principles of cataloging tradition (93). Many of these projects started before On the Record, so they can't be seen as direct results. On the other hand, the fact that the de facto national library had initiated such an ambitious research project focused the attention of the U.S. library community in a way that no other scholarship could. Chambers and Myall see On the Record as "a snapshot of where leaders in the library community (as represented by the members of the LCWG) thought we were and where we thought we were going…[and] seemed likely to remain a key document in cataloging and U.S. library history of the early twenty-first century" (92).

The current excitement about FRBR implementation is a case in point. Along with their recommendation to suspend work on RDA, the LCWG also urged more comprehensive testing of the FRBR model (33). This sort of testing was already going on as part of the context of On the Record, and it has continued to flourish, something that’s easy to see from a quick skim of the FRBR blog. Recent work includes the Variations project at Indiana University and work by the Online Audiovisual Catalogers (OLAC) to use FRBR with moving images. I just found an article describing the benefits of “frbrisation” of the Slovenian national bibliography and others (Pisanski, Zumer, and Aalberg). Diane told us a couple of weeks ago about a flurry of FRBR papers and projects submitted to DC-2010, the Dublin Core annual conference. This last development is interesting, since some (incorrectly, I believe) don’t see the relevance of the Dublin Core Metadata Initiative to library notions of bibliographic control (cite). But this is just what On the Record predicted and urged, that research and development of bibliographic control take place all over the information universe. I hope that this flurry of research is productive, and FRBR doesn’t just become the “Ginzu knife of metadata models” as Diane wondered in our class discussion the other day.

In March, when I got an email from Allyson Carlyle announcing that 2010 is the Year of Cataloging Research, I mostly felt wistful that I probably wouldn’t have much time to participate. What I didn’t note then was that this exhortation and challenge was issued by an ALCTS committee (ALA’s Association for Library Collections & Technical Services Implementation Task Group on the Library of Congress Working Group Report) in direct response to the LCWG’s call to “Build an Evidence Base” (37).

Two years after On the Record, the ALCTS folks are encouraging catalogers especially to join the fray and substantially influence the future of their profession. Randy Roeder echoes the LCWG’s judgment that bibliographic “research has lagged behind events and … the knowledge base provides woefully inadequate support for making decisions certain to have a profound effect on the future of libraries and the profession” (2). In the spirit of the LCWG’s recommendations, the ALCTS Implementation Task Group is working to reach beyond the traditional cataloging community to other communities like Dublin Core and the International Society for Knowledge Organization (Carlyle).

Randy Roeder warns that despite their acknowledged expertise in bibliographic control, catalogers who do research will miss the chance to shape the future if they stay focused on traditional topics and fail to step out of the library comfort zone. He argues for library integration in the Semantic Web, and points to a dangerous divide between “visionaries” who are trying to make that integration reality and “most practitioners and managers—groups that produce much of our research” (3). Not just any kind of research will do, according to Roeder: “A Year of Cataloging Research—let’s hope we have the courage to ask the right questions” (3).

As we’ve been discussing so far in this course, U.S. libraries face an uncertain future. Leadership is lacking. But the environment is ripe for positive change. By galvanizing the situation, focusing the discussion, and getting people working to gather evidence, On the Record has earned a place as a seminal document for 21st century librarianship. Let’s hope that this provocation will result in a happy ending for libraries. And if I can just get through this quarter, I’d love to get involved in some cataloging research!

References:

Carlyle, Allyson. 2010 Year of Cataloging Research. 6 Jan. 2010. 26 Apr. 2010 http://faculty.washington.edu/acarlyle/yocr/index.html

—. “Announcing 2010, Year of Cataloging Research.” Cataloging and Classification Quarterly 47.8 (2009). 27 Apr. 2010 http://catalogingandclassificationquarterly.com/ccq47nr8.html

Chambers, Sydney and Carolynne Myall. “Cataloging and Classification: Review of the Literature 2007-8.” Library Resources and Technical Services 54.2 (2010): 90-114.

Denton, William. The FRBR Blog. Weblog. 23 Apr. 2010. http://www.frbr.org/

Library of Congress Working Group on the Future of Bibliographic Control. On the Record: Library of Congress Working Group on the Future of Bibliographic Control. Washington, DC: Library of Congress, 2008. 20 Apr. 2010 http://www.loc.gov/bibliographic-future/news/lcwg-ontherecord-jan08-final.pdf

Mann, Thomas. “’The Changing Nature of the Catalog and Its Integration with Other Discovery Tools. Final Report. March 17, 2006. Prepared for the Library of Congress by Karen Calhoun.’ A Critical Review by Thomas Mann.” Review prepared for AFSCME 2910, The Library of Congress Professional Guild (2006). http://www.guild2910.org/AFSCMECalhounReviewREV.pdf

—. “’On the Record’ but Off the Track.” Report prepared for AFSCME 2910, The Library of Congress Professional Guild (2008). http://www.guild2910.org/WorkingGrpResponse2008.pdf

Marcum, Deanna B. Response to On the Record: Report of the Library of Congress Working Group on the Future of Bibliographic Control. Washington, DC: Library of Congress, 2008. 29 Apr. 2010 http://www.loc.gov/bibliographic-future/news/LCWGResponse-Marcum-Final-061008.pdf

Moving Image Work-Level Records Task Force. Online Audiovisual Catalogers. 26 Apr. 2010 http://www.olacinc.org/drupal/?q=node/27

Pisanski, Jan, Maja Zumer, and Trond Aalberg. “Frbrisation: Towards a Bright New Future for National Bibliographies.” International Cataloguing and Bibliographic Control 39.1 (2010): 3-6.

Roeder, Randy. “A Year of Cataloging Research.” Library Resources and Technical Services 54.1 (2010): 2-3.

Variations/FRBR: Variations as a Testbed for the FRBR Conceptual Model. 5 Nov. 2008. Indiana University Digital Library Program. 11 Apr. 2010 http://www.dlib.indiana.edu/projects/vfrbr/index.shtml

RDA’s ‘Legacy’ Approach

April 19, 2010 by

Hillmann and Coyle’s article on the problems of RDA (available at http://www.dlib.org/dlib/january07/coyle/01coyle.html) raises a number of important points. However, unlike at least one classmate, I disagree with their conclusion that a fundamental restructuring is necessary. By using ‘legacy’ forms, RDA becomes a development of existing cataloging standards rather than a completely new form of cataloging; the more RDA differs from existing forms, the more difficult the transition will be. The article was written in 2007, and the standard has changed since then; the idea that RDA is ‘not different enough,’ however, seems to be alive and well.

The other idea that is alive and well is that RDA will be too difficult to implement: that the cost of a subscription alone will prevent small libraries from using RDA, and that the complexity will scare off potential users (see, for example, http://carolslib.wordpress.com/2010/02/11/promises-promises/). I don’t think that complexity is necessarily a problem in itself. As a number of commenters have pointed out (for example, Roy Tennant at http://www.libraryjournal.com/blog/1090000309/post/1520052752.html), cataloging as it exists is already very complex; MARC has fields that very few people understand or use, and more are being added to carry RDA-specific information. The problem comes when catalogers familiar with AACR2 are asked to learn a new system from scratch whose benefits are not readily apparent. Problems will also arise when converting existing records if the differences between the two systems are too great; as Tennant’s post points out, some of the extra complexity is there to assure ‘a smoother transition’.
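To make the “more fields” point concrete, here is a minimal sketch using the pymarc library (assuming its classic flat subfield-list API); the record content is invented, but 336, 337, and 338 are the fields MARC21 added to carry RDA’s content, media, and carrier types:

    # A hypothetical MARC21 record showing RDA data riding along in new fields.
    from pymarc import Record, Field

    record = Record()
    record.add_field(
        Field(tag='245', indicators=['0', '0'],
              subfields=['a', 'Understanding FRBR']))

    # The 33X fields carry RDA's content, media, and carrier types.
    record.add_field(
        Field(tag='336', indicators=[' ', ' '],
              subfields=['a', 'text', 'b', 'txt', '2', 'rdacontent']))
    record.add_field(
        Field(tag='337', indicators=[' ', ' '],
              subfields=['a', 'unmediated', 'b', 'n', '2', 'rdamedia']))
    record.add_field(
        Field(tag='338', indicators=[' ', ' '],
              subfields=['a', 'volume', 'b', 'nc', '2', 'rdacarrier']))

    print(record)  # older consumers simply skip the 33X fields they don't know

Nothing about the legacy fields changes; the new information is purely additive.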

It’s great to expand the cataloging universe, but we shouldn’t forget that most of a library catalog’s contents are already present and accounted for. I wince when I see articles discussing how the catalog must change if it’s to compete with Google. It isn’t competing with Google. Google serves its functions well, but for anything beyond known-item searches – that is to say, finding a website you’ve already visited, or one you know must exist even if you’re unsure of the URL – it’s far from ideal, a fact all but the most casual users are already aware of. Google even has its own version of the ‘multiple-versions problem’, in the form of webpages that borrow content from Wikipedia; unlike library catalogs, it has made no attempt to solve it, and seems doomed to index the web at the ‘item’ level and only the ‘item’ level. Catalogs have a much smaller world to index, and they index it better. The structured format of catalogs allows more precise searching; subject headings, in particular, retrieve more precisely than keyword searching.
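A toy illustration of that precision claim (invented records, not a real catalog API): a keyword can’t tell the planet from the element from the musician, but a controlled heading can:

    # Toy data: three records that all contain the keyword 'mercury'.
    records = [
        {'title': 'Mercury in the Environment', 'subjects': ['Mercury (Element)']},
        {'title': 'Observing Mercury at Dawn', 'subjects': ['Mercury (Planet)']},
        {'title': 'The Music of Freddie Mercury', 'subjects': ['Rock musicians']},
    ]

    keyword_hits = [r['title'] for r in records
                    if 'mercury' in r['title'].lower()]
    subject_hits = [r['title'] for r in records
                    if 'Mercury (Planet)' in r['subjects']]

    print(keyword_hits)  # all three titles: keywords can't disambiguate senses
    print(subject_hits)  # only the astronomy title: the heading is precise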

There are problems with the content of catalogs as they exist, but exist they do, and they do their job well; transitioning to a new format is a Herculean enough task without tossing out what we already have. Burton states in https://learningaboutrda.wordpress.com/2010/04/17/lets-be-brave/ that “Trying to fit RDA (or any new standard) into the old infrastructure seems like a waste of time, money, and brainpower in the long run because MARC will limit what we can do.” Similarly, https://learningaboutrda.wordpress.com/2010/04/16/a-dear-marc-letter/ raises problems with the very idea of MARC, because its foundations in the card catalog left so much of a mark. (Pun not intended.)

The trouble is that new solutions will have to be built on the old infrastructure. MARC can, and should, be expanded to handle additional types of information without reducing its present usefulness. Backwards compatibility is an important goal in software design because users of a new version inevitably have old files they need to keep using. We already have library catalogs, and the contents of existing catalogs will continue to be the resources most catalog users are searching for. By sticking closely to current standards, RDA ensures that instead of a ‘clean break’ we can have a smooth transition.
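Here is a small sketch of what that buys us (again with pymarc and invented record content): a consumer written before RDA asks only for the tags it knows and never even notices the newer fields:

    from pymarc import Record, Field

    record = Record()
    record.add_field(Field(tag='245', indicators=['0', '0'],
                           subfields=['a', 'Understanding FRBR']))
    record.add_field(Field(tag='336', indicators=[' ', ' '],
                           subfields=['a', 'text', 'b', 'txt', '2', 'rdacontent']))

    def legacy_title(rec):
        # A pre-RDA consumer: it only ever asks for the 245 title field.
        field = rec['245']
        return field['a'] if field else '[no title]'

    print(legacy_title(record))  # 'Understanding FRBR'; the 336 goes unnoticed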

ALA and DNB Responses to the Full Draft of RDA

April 17, 2010 by

I took a closer look at the formal responses and recommendations given to the November 2008 full draft of RDA, in particular the responses from ALA and the German National Library (DNB). Both leave the impression of genuine interest in seeing RDA’s stated goals realized, endangered by intense frustration at the draft’s failure to move toward those goals. The tone of the responses differs (ALA overtly critical of the quality of the draft and of problems with the review process, the DNB diplomatic in its substantive but politely phrased comments), but they are notable for their apparent agreement on what is valuable in RDA. In particular, there is clear support for the collaborations involving Dublin Core and ONIX and for the steps needed to take advantage of the RDA element set and vocabularies.

The DNB response states:

“We welcome the close cooperation of the JSC with the main metadata standard communities like MARC 21, Dublin Core, and ONIX groups. We also welcome the activities regarding a registry for the RDA vocabulary.” (p. 3)

ALA argues somewhat more directly that:

“Finally, the collaborations with the ONIX and DCMI communities have already yielded what may turn out to be some of the most significant products of the RDA project.” (p. 2)

The indication is that these are not just the salvageable remnants of an otherwise lost cause, but that the elements and vocabularies are the core around which the rest of RDA needs to revolve. It seems that while the text of RDA remains in flux, the registration of elements and vocabularies (opening them up to practical use) is an area where real progress has been made.

Both ALA and the DNB push in their responses for RDA to move more dramatically in the direction of the Semantic Web, though the DNB seems more emphatic in its concern, making the case that:

“At the conceptual level, RDA is a step in the right direction but without a connection to the Semantic Web it will be irrelevant outside the library world.” (p. 2)

As one of the expectations for RDA is that it will make library data relevant to other data communities, this is one of the central areas where both ALA and the DNB indicate that the draft did not live up to expectations.
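For anyone who finds “connection to the Semantic Web” abstract, here is a minimal, hypothetical sketch, using the rdflib library and Dublin Core terms, of the statement-based description that world expects; the URIs and values are invented for illustration, not real DNB or RDA identifiers:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import DCTERMS

    EX = Namespace('http://example.org/bib/')  # placeholder namespace

    g = Graph()
    book = EX['record/123']
    g.add((book, DCTERMS.title, Literal('On the Record')))
    g.add((book, DCTERMS.creator, Literal('Library of Congress Working Group')))
    g.add((book, DCTERMS.issued, Literal('2008')))

    # Serialized as Turtle, the same statements can be consumed by any
    # Semantic Web client, inside or outside libraryland.
    print(g.serialize(format='turtle'))

Once bibliographic statements live in a form like this, the relevance outside the library world that the DNB is asking for stops being rhetorical.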

Making the code amenable to international use by non-English communities, and applying concepts and terms from FRBR and FRAD consistently, are two other central areas where both responses clearly think the draft did not live up to its own aspirations. It remains to be seen how well these concerns have been taken into account in subsequent revisions of RDA, but they seem likely to require ongoing movement and pressure well into the future.

It may well be that the only way to make sure RDA comes to something is for a few visionaries to strike out and build working examples for the rest of the library world to look at. I am particularly interested in the DNB’s effort to go the extra mile, pushing for RDA to be meaningful in the long term and in a global context. It is hard to tell whether their efforts will result in major textual changes, but their willingness to take the initiative in running with the most useful parts of RDA appears to be exactly what we need. Just today there was an announcement on the NGC4LIB listserv about the opening of a DNB linked data prototype making use of RDA. This is the type of project that will tease out the practical possibilities of RDA, and hopefully put pressure on US institutions to take their own courageous steps out into the void.

——————————————————————————————————-

ALA/CC:DA. (2009, February 9). RDA: Resource Description and Access – Constituency Review of Full Draft. Available at: http://www.rda-jsc.org/docs/5rda-fulldraft-alaresp.pdf

Office for Library Standards, German National Library. (2009, February 2). Comments on “RDA – Resource Description and Access” – Constituency Review of November 2008 Full Draft. Available at: http://www.d-nb.de/standardisierung/pdf/comments_rda_full_draft.pdf

Denton’s historical contextualization of FRBR

April 17, 2010 by

Denton’s article “FRBR and the History of Cataloging” does a great job of assessing the history of cataloging and contextualizing the development of FRBR as a product of that history, rather than as a brand-new standard unconnected to it.

The thing that struck me about this article was that Denton manages to take a very long tradition of cataloging and classification theory and boil it down in a way that makes it not only accessible but relevant to what we as librarians try to accomplish today. By sticking to the basic principles of access and service that unite all the different theories and principles, he shows that FRBR is just a continuation of this tradition. Keeping the discussion at this level was extremely helpful for me, a budding cataloger who has a long way to go before understanding it all.

I think the point he brings up about FRBR being developed out of a long and rich history of cataloging is especially pertinent in light of what Maggie Dull has said in her “Dear Marc” post about catalogers’ glorification of MARC and AACR2. When these standards are held up as the epitome of cataloging, change becomes very hard to implement. I see this as the problem of people taking cataloging standards out of their long historical context, and I think more people need to think about FRBR in the way that Denton does.

I think that any profession that has settled into a routine of doing things a certain way will have problems when change arrives. Many people want either to ignore the change or to take issue with it. I currently work for a financial compliance office that is having trouble getting financial advisers to turn in paperwork that complies with federal regulations. Because the government is cracking down on the financial industry right now, this is becoming a real issue. Older, seasoned financial advisers don’t see the issue in the context of the changes happening in the profession and in society, and many of them just go about business as usual because they see no problem in doing so. In her post on Catalyst for module 2, Maggie Dull made a good point: if people don’t find fault in their current tools, they will not be as receptive to new ones.

I think the problem is exacerbated by the fact that more and more cataloging departments rely primarily on paraprofessionals, rather than librarians, to do the majority of the cataloging. Many cataloging department heads dread the idea of having to retrain all of their staff on a new set of rules. While Denton’s contextualization does a great job of helping people understand FRBR, I don’t think it puts a dent in the dread of retraining people who have been living by AACR2 and MARC day in and day out for years.

Denton, William. “FRBR and the History of Cataloging.” In Understanding FRBR: What It Is and How It Will Affect Our Retrieval Tools, Arlene Taylor (ed.). Accessed at: http://pi.library.yorku.ca/dspace/handle/10315/1250

