YourSpace, MySpace, DSpace? Finding a Place for Institutional Records
Sessions got off to a great start this morning, at least for me: the first session I attended was really thought-provoking. Tim Pyatt (Duke), Erin O'Meara (University of Oregon), and Nancy Deromedi are all using DSpace, an open-source digital repository, to house access (but not preservation) copies of archival university records, government documents, and other materials.
Although Pyatt, O'Meara, and Deromedi saw DSpace as a valuable access tool, they noted that it was developed to capture the grey literature created by university faculty--the papers and other research products that don't go through peer review or formal publication but which contain information warranting preservation--and wasn't really designed for archival materials. As a result, it doesn't readily accommodate the contextual information (data about records creators, relationships between records, etc.) that enables users to make sense of archival records. Moreover, it requires manual entry of descriptive information about each file placed within it--a real challenge for archivists responsible for managing thousands of files.
Erin O'Meara noted that DSpace's limitations forced her both to be creative in figuring out how to associate contextual information with records and to ponder whether she was creating this information because her archival background demanded it or because users actually needed it. Nancy Deromedi and several audience members concurred that it may be time to rethink our descriptive practices and to focus on providing the item-level access users want.
However, it was Pyatt and O'Meara's responses to an audience member concerned that archivists are creating multiple access systems for electronic records in various formats that really got me thinking. Pyatt noted that we're simply not at the point where we have a single system that can provide access to all types of electronic records. O'Meara then questioned whether we've ever had a single system for providing access to records of any type: we've all inherited legacy systems--paper files, index cards, ordering schemes--and in many cases we can't integrate them into one overarching system. I started thinking about the multiple and overlapping legacy systems developed at my repository. We continue using some of them even though we've clearly outgrown them, and we're making only halting progress toward building the integrated system that we need. I think most, if not all, archives are in the same boat.
I then started thinking about something that Pyatt said earlier in the session: when he and his colleagues were planning to build Duke's institutional repository, they researched the various options for doing so--DSpace, FEDORA, Greenstone, EPrints, various commercial applications--and determined that all of them were somehow fatally flawed. He didn't elaborate, and I didn't get the chance to follow up with him, but I suspect that the flaws stem from the simple fact that these applications were all designed to meet the needs of libraries, not archives.
To make a long story short, those of us who work with electronic records typically use a variety of overlapping systems that were built with specific purposes in mind, that often fail to exchange information easily, and that don't always meet our current needs. We also spend a lot of time adapting tools and practices designed for the library community to meet our own needs. In a lot of ways, the new world of electronic records isn't new at all. Maybe it's time for us to stop making do and start designing systems that truly meet our current needs; doing so would require substantial institutional (or, preferably, multi-institutional) commitment and, in all likelihood, substantial grant funding, but the end result might be worth it.
Convergence: R(e)volutions in Archives and IT Collaboration
Another good session. Phil Bantin (Indiana University), the panel chair, noted at the outset that IT folks don't view archivists and records managers as "players" because they don't know what we can contribute to system design, etc., and that we need to work on winning small victories that will eventually lead others to recognize what we bring to the table. Rachel Vagts (Luther College) and Jennifer Gunter King (Mount Holyoke College) discussed how the merger of the library and IT departments at their institutions benefited their archives. Daniel Noonan (Ohio State) discussed how he was able to leverage concerns about e-discovery and users' lack of knowledge about the differences between archiving and backing up files to establish relationships with IT staff; however, owing to IT staffers' desire to avoid "scope creep," he hasn't been able to get involved in existing system design projects.
However, the real standout was Paul Hedges (Wisconsin Historical Society), who started out as an archivist but eventually became head of IT at his repository. He noted that archivists are like most people in that they see IT as responsible for maintaining basic services such as e-mail, but he emphasized that they should instead see IT as a strategic tool that will further their mission and aims. He also emphasized that archivists need to educate themselves about the basics of IT and that IT personnel need a basic grasp of archival terms and concepts--and that, in his experience, it's been far easier to explain archival concepts to IT people than to explain IT concepts to archivists. In his view, archivists need to start reading Government Computing News, eWeek, etc., so that they know, in a general way, what IT folks are concerned about and become familiar with IT acronyms and terminology. They also need to start going to IT conferences--even if it means they have to skip archival conferences in order to do so--and learning about IT departments' stated missions and goals.
CONTENTdm Brown Bag Lunch
I attended this lunch so I could meet the awesome Erik Mayer from OCLC, who was there to outline recent improvements to CONTENTdm, OCLC's digital collections management application. I've spoken to Erik many times over the phone and have exchanged hundreds of e-mails with him, but this was the first time we'd actually met. He's as delightful and helpful in person as he is online, and he had all kinds of interesting things to say about OCLC's new Web crawler and CONTENTdm, which is due for a really promising upgrade.
Digital Dilemmas: Dealing with Born-Digital Surrogate Audio and Audiovisual Collections
I attended this session because a colleague who is responsible for overseeing the digitization of our multimedia holdings couldn't come to SAA this year. The technical presentation given by George Blood (Safe Sound Archive) was fascinating, if a bit over my head, and he and Angelo Sacerdote (Bay Area Video Coalition) identified a number of resources that I'll pass on to my colleague. I'll also let her know about the Monterey Jazz Festival audio and video digitization project; Hannah Frost (Stanford University) highlighted some of the technical problems she encountered as the project unfolded and discussed how the digitized recordings will be made accessible to the public.
Friday, August 29, 2008
2 comments:
Thanks for posting this overview of some very interesting sessions at the SAA conference. I look forward to reading more of your posts about archives and electronic records. Will you be adding more of your own opinion on the field and some solutions for problems in the field?
P.S. Your link to ArchivesNext on your blog roll has too many https :)
Thanks, kitcatreb! I do plan to use this blog as a forum for some of my thoughts on archives, archivists, and some possible solutions to our problems. However, I may not do so right away. I've committed myself to blogging throughout the entirety of SAA, and it's really wearing me down. I'll probably take a few days off to recharge when I get home.
I fixed the ArchivesNext link. Mea culpa.