RDA’s ‘Legacy’ Approach


Hillmann and Coyle’s article on the problems of RDA (available at http://www.dlib.org/dlib/january07/coyle/01coyle.html) raises a number of important points. However, unlike at least one classmate, I disagree with their conclusion that a fundamental restructuring is necessary. By using ‘legacy’ forms, RDA becomes a development of cataloging standards, rather than a completely new form of cataloging; the more different RDA is from existing forms, the more difficult the transition will be. The article was written in 2007, and since then, the standard has changed; however, the idea that RDA is ‘not different enough’ seems to be alive and well.

The other idea that is alive and well is that RDA will be too difficult to implement; that the cost of a subscription alone will prevent small libraries from using RDA, and that the complexity will scare off potential users (see for example http://carolslib.wordpress.com/2010/02/11/promises-promises/). I don’t think that complexity is necessarily a problem in itself. As a number of commenters have pointed out (for example, Roy Tennant at http://www.libraryjournal.com/blog/1090000309/post/1520052752.html), cataloging as it exists is already very complex; MARC has fields that very few people understand or use, and more are being added to deal with RDA-specific information. The problem comes when catalogers familiar with AACR2 are asked to learn a new system from scratch, when its benefits are not readily apparent. Problems will also arise when attempting to convert existing records, if the differences between the two systems are too great; as Tennant’s post points out, some of the extra complexity is there to assure ‘a smoother transition’.
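To make the ‘more are being added’ point concrete: the RDA-era additions to MARC include fields 336 (content type), 337 (media type), and 338 (carrier type), which record what the old general material designation lumped into 245 $h. A typical print-book record might carry lines like these (the values come from the RDA vocabularies; the specific record is invented for illustration):

```
336 ## $a text       $b txt $2 rdacontent
337 ## $a unmediated $b n   $2 rdamedia
338 ## $a volume     $b nc  $2 rdacarrier
```

Three extra fields per record is exactly the kind of added complexity Tennant describes: more for catalogers to learn, but layered onto the existing MARC structure rather than replacing it.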

It’s great to expand the cataloging universe, but we shouldn’t forget that most of a library catalog’s contents are already present and accounted for. I wince when I see articles discussing how the catalog must change if it’s to compete with Google. It doesn’t need to. Google serves its functions well, but for anything beyond known-item searches – that is to say, for a website you’ve already visited, or that you know must exist even if you’re unsure of the URL – it’s far less than ideal, a fact all but the most casual users are already aware of. Google even has its own version of the ‘multiple-versions problem’, in the form of webpages that borrow content from Wikipedia; unlike library catalogs, it has made no attempt to solve it, and seems doomed to index the web at the ‘item’ level and only the ‘item’ level. Catalogs have a much smaller world to index, and they index it better: their structured format allows more precise searching, and subject headings beat keyword searching at precise retrieval.

There are problems with the content of catalogs as they stand, but they do exist, and they do their job well; transitioning to a new format is a Herculean enough task without tossing out what we already have. Burton, at https://learningaboutrda.wordpress.com/2010/04/17/lets-be-brave/, argues that “Trying to fit RDA (or any new standard) into the old infrastructure seems like a waste of time, money, and brainpower in the long run because MARC will limit what we can do.” Similarly, a post at https://learningaboutrda.wordpress.com/2010/04/16/a-dear-marc-letter/ raises problems with the very idea of MARC, because its foundations in the card catalog left so much of a mark. (Pun not intended.)

The trouble is that new solutions will have to be built on the old infrastructure. MARC can, and should, be expanded to deal with additional types of information without reducing its present usefulness. Backwards-compatibility is an important goal in software design, because users of a new version will inevitably have old files they need to keep using. We already have library catalogs, and the contents of existing catalogs will continue to be the resources most catalog users are searching for. By sticking closely to current standards, RDA ensures that instead of a ‘clean break’, we can have a smooth transition.


One Response to “RDA’s ‘Legacy’ Approach”

  1. maggiedull Says:


    Thank you for this post. As someone who tends to sit towards the “not enough change” side of things, I really appreciate your cogent perspective. You make some excellent points about the necessary transition between our existing cataloging environment and the one seemingly called for by the implementation of RDA. I agree wholeheartedly that backwards-compatibility is necessary to utilize the millions of MARC/AACR2 records already in use. To do otherwise would be a waste of decades of effort and care. And whether or not MARC is up to the challenge of being that bridge is clearly up for debate, as shown by our discussion here and discussions of the standard elsewhere in the blogosphere.

    But I am wondering about your take on Googlizing catalogs or rather the movement towards developing catalogs or catalog interfaces to compete with Google. I really like how you identified Google’s own “multiple-version” problem. You’re absolutely right that the power that things like Authority Files and LCSH bring to searching simply cannot be duplicated by Google in its current form.

    But there is something about Google that is attracting our patrons. In the reading for this week, “On the Record” (http://www.loc.gov/bibliographic-future/news/lcwg-ontherecord-jan08-final.pdf) talks about this very issue: users are going to things like Google or Amazon first and the library catalog second (p. 31). I can’t tell you the number of ILL requests I’ve processed where the “Cited in” says “Google” or “Google Scholar”, from both undergraduates and professors. It could be the one-stop search box, the relevancy rankings, the immediate linking to full text – I can’t say, though the literature might.

    Instead of making the catalog more like Google, what if we tried to make Google more like a catalog? What if librarians brought what we do best – our authority work, our subject work, our vocabularies, all of the intellectual expertise – to bear on the web? If we just turn our catalogs into keyword indexes then, you are so right, all of that is wasted. I think that’s what people mean when they’re talking about opening libraries up to the Semantic Web – it doesn’t necessarily make us weaker, but makes us more relevant and the web stronger. To me that’s a win-win, though it will take a lot of work to get our standards and data ready to make that transition.
