
Thursday, October 7, 2010

Best Practices Exchange, day three: Vermont functional classification

Crested saguaro in front of Old Main, University of Arizona, Tucson, Arizona, 1 October 2010. As its name suggests, Old Main, which was built in 1891, is the oldest building on the campus. It now serves as the university's admissions office.

I know that this is a really tardy post, but I haven't had much time for this blog as of late. Immediately after the 2010 Best Practices Exchange (BPE) ended last Friday, I rented a car and headed to Tucson to see an old friend whom I hadn't seen in (her word) hmmty years. I flew out of Phoenix last Saturday morning, and between overfull flights and an overfull head I wasn't able to write anything. When I got back to Albany, I needed to focus on digging my way out from under an avalanche of work, taking care of some last-minute Capital Region Archives Dinner stuff, and getting ready to go on vacation. This post was written at ALB, in the air between ALB and DCA, and at DCA (which is fast becoming my least-favorite airport), and posted from my parents' house in Ohio. My parents and I are heading to my aunt's Internet-free house in West Virginia tomorrow morning, so I'm going to be disconnected for a little while. I'm actually kind of looking forward to it.


Every BPE session I attended was interesting and worthwhile, but Policy and Administration 7 was particularly thought-provoking. It centered upon two very different but equally compelling initiatives: the functional classification infrastructure developed by the Vermont State Archives and Records Administration (VSARA) and a grant-funded University of North Carolina at Chapel Hill effort to create a joint master's degree program in library/information science and public policy. In lieu of writing a single, monster post, I'm going to discuss Vermont's work in this post and the University of North Carolina at Chapel Hill project in a companion piece. (NB: I've gotten permission from the presenters to discuss these projects, so names will be named and details will be detailed.)

Tanya Marshall noted that VSARA's distinctive approach to appraisal is rooted in its newness: VSARA was established in 2003 within the Secretary of State's Office, and it acquired records management responsibilities in 2008. The newly hired staff had a deeply felt need to assess the structure and functions of state government and to identify important records held by agencies. They also had to contend with a large volume of records that the Secretary of State had acquired in past decades. They quickly realized that these records were broken down into series that were actually accessions: for example, driver's license records created in the 1900s were classified as one series, and identical records created in the 1910s were classified as a completely separate series.

As Marshall and her colleagues researched the structure and functions of state government and compiled the results of their research, their objectives gradually evolved. They sought to:
  • Establish intellectual control over their existing holdings
  • Study state government by focusing on its parts
  • See the "big picture" of state government from multiple vantage points
  • Develop an objective strategy for documenting state government functions, legislation, and agencies over time
  • Capture and reuse staff research, especially stable information such as legislative acts and dates of creation
  • Develop a balanced and consistent appraisal approach
  • Document recordkeeping decisions
  • Create reports and other resources as consistently and as efficiently as possible
  • Develop the ability to export and reuse data in various ways -- including ways not yet envisioned by staff -- and to conform to ISO 15489 and other standards
The Vermont Functional Classification System (VCLAS) that Marshall and her colleagues developed uses standardized terminology to record information that breaks down the complexities of government into its constituent parts:
  • Legislation
  • Public agencies
  • Areas of accountability (also called domains)
  • Activities (e.g., permitting, licensing)
  • Transactions
Each of these areas is further broken down into facets that support many different types of analysis, and VSARA staff can use VCLAS to do a number of really interesting things:
  • Identify agencies that are or were engaged in specific activities. In addition to supporting VSARA's internal needs, this capacity can help VSARA supply information to others. For example, several years ago, officials who wished to examine the state's permit-issuing activities were impressed by VSARA's ability to identify, with little advance notice, all of the state agencies that issued permits
  • Analyze activities to determine the types of records likely to be held by an agency. Staff have discovered that activities tend to generate the same types of records regardless of the creating agency or area of responsibility. In many instances, they can generate macro-level inventories of the types of records that a given agency likely holds and then work with agency personnel to determine whether the records actually exist and are being managed properly
  • Conduct functional analyses of related activities, including those that are performed by more than one agency
  • Analyze domains and activities to identify records that most clearly warrant long-term preservation
In the future, VCLAS may also help staff conduct functional analyses that:
  • Identify electronic records that warrant permanent preservation but are at risk of being lost
  • Identify current and planned electronic recordkeeping systems that will house electronic records of enduring value and work with the state Chief Information Officer to ensure that these systems manage the records properly
  • Enable VSARA to supply records creators with some basic metadata about the electronic records in their possession.
Cool stuff.
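
Marshall didn't delve into the system's underlying implementation, so take the following as nothing more than my own back-of-the-envelope sketch of how a functional classification might be modeled; every agency, activity, and record-type mapping in it is invented. It illustrates two of the capabilities described above: finding every agency engaged in a given activity and generating a macro-level inventory of the record types an agency likely holds.

```python
# Purely hypothetical sketch of a VCLAS-style model; all names and sample
# data are invented for illustration and are not VSARA's actual schema.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str                   # e.g., "permitting"
    typical_records: list[str]  # record types the activity tends to generate

@dataclass
class Agency:
    name: str
    domains: list[str]          # areas of accountability
    activities: list[Activity] = field(default_factory=list)

PERMITTING = Activity("permitting", ["applications", "permits issued", "inspection reports"])
LICENSING = Activity("licensing", ["applications", "license registers", "renewal files"])

agencies = [
    Agency("Dept. of Environmental Conservation", ["environment"], [PERMITTING]),
    Agency("Dept. of Motor Vehicles", ["transportation"], [LICENSING]),
    Agency("Dept. of Health", ["public health"], [PERMITTING, LICENSING]),
]

# "Which agencies issue permits?" -- the sort of query VSARA answered on short notice
permit_issuers = [a.name for a in agencies
                  if any(act.name == "permitting" for act in a.activities)]
print(permit_issuers)

# Macro-level inventory: record types a given agency is likely to hold
def likely_records(agency: Agency) -> set[str]:
    return {rec for act in agency.activities for rec in act.typical_records}

print(likely_records(agencies[2]))
```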

Throughout Marshall's presentation, I couldn't help but think that my own repository already gathers a lot of the data that VSARA collects and adds to VCLAS -- information about agencies' statutory mandates, organizational structure, core responsibilities and activities -- but some of it is collected by appraisal archivists and some of it is collected by reference/description archivists, and different elements reside in different systems. I suspect that most other state archives are in the same boat -- and that most, if not all, of us would benefit from giving the work of our Vermont colleagues a very close look.

Monday, August 16, 2010

CMSWire SAA 2010 recaps

Six of the forty-six Louis Saint-Gaudens centurions standing guard over the Main Hall of Union Station, Washington, DC, 15 August 2010, 11:42 AM.

FYI, Mimi Dionne has written excellent recaps of four sessions held at the 2010 joint CoSA-NAGARA-SAA meeting for CMSWire magazine.

Sunday, June 6, 2010

NYAC/ARTNY: Archivists' Toolkit

The Hudson River, as seen from the grounds of the Vanderbilt Mansion National Historic Site, Hyde Park, New York, 4 June 2010.

Last week, I attended the joint meeting of the New York Archives Conference (NYAC) and the Archivists Roundtable of Metropolitan New York (ARTNY), which was held at Marist College in Poughkeepsie. Unfortunately, Mac-using attendees discovered upon arrival that, despite Marist’s promises to the contrary, they could not connect to Marist’s wireless network. Now that I’ve reconnected, I’ll put up a couple of posts about the highlights of this year’s conference.

In my view, the best session of the conference was Session 1, “Implementing, Modifying, and Teaching the Archivists' Toolkit.” The Archivists’ Toolkit (AT) is an increasingly popular open source tool that supports accessioning, location management, and description of archival materials, and the session itself attracted a capacity crowd.

Janet Bunde of New York University (NYU) discussed a recent effort to integrate the AT into NYU’s Advanced Archival Description course so that students, who typically lacked the funds needed to attend AT workshops sponsored by the Society of American Archivists, would become familiar with the tool and hone their descriptive skills. The students reviewed the AT user’s manual in advance, then devoted an entire class session to entering sample data into the AT. At the end of the class, students discussed where they entered specific data elements and the descriptive output that resulted. Although the discussion wasn’t as extensive as Bunde would have liked, it shed light on students’ descriptive choices and revealed that, despite the use of some odd terminology, the AT’s interface is relatively intuitive.

Bunde stressed that this exercise didn’t, in and of itself, teach archival description, but it made me think about how to do so. I created a handful of MARC records while working as a student assistant, but I really didn’t feel comfortable with description until I found myself responsible for reviewing MARC records created by archivists at other repositories. I soon acquired an intimate knowledge of MARC and the ability to differentiate between acceptable variations in local practice and out-of-bounds tag usage. I really like the idea of having students openly compare and defend their descriptive choices, and using the AT as a teaching tool has real promise, particularly if, as NYU plans to do this fall, it’s incorporated more fully into the course curriculum.

Deena Schwimmer of Yeshiva University discussed how her repository, which has only two professional staffers and few IT resources, used the AT to centralize, as quickly as possible, holdings and descriptive information about its manuscript collections. Working with a clerical assistant, Schwimmer first culled holdings information from donor files and the relatively small number of MARC records describing the collections and entered it into the AT. Then, working in tandem with an intern who created collection-level descriptions, she used the AT to create Encoded Archival Description (EAD) finding aids that contained only the most basic descriptive elements: Biographical/Historical Note, Scope and Content, Abstract, Conditions Governing Access, Conditions Governing Use, Language of Materials, and Title and Date information. She also used the AT to manage the project: she added fields that identified whether an EAD finding aid had been produced and enabled her and her intern to exchange notes about specific collections.
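
Schwimmer didn’t get into encoding mechanics, but for readers who haven’t worked with EAD, here’s a hypothetical sketch of just how skeletal a “basic elements only” finding aid can be, using Python’s standard library and invented collection data; note that a valid EAD instance would also need an eadheader and other required wrapper elements.

```python
# Hypothetical sketch: a bare-bones EAD 2002 finding aid limited to the basic
# elements Schwimmer listed. The collection data is invented, and a valid EAD
# instance would also need an eadheader and other required wrapper elements.
import xml.etree.ElementTree as ET

def minimal_ead(title, date, abstract, bioghist, scope, access, use, language):
    ead = ET.Element("ead")
    archdesc = ET.SubElement(ead, "archdesc", level="collection")
    did = ET.SubElement(archdesc, "did")
    ET.SubElement(did, "unittitle").text = title        # Title
    ET.SubElement(did, "unitdate").text = date          # Date
    ET.SubElement(did, "abstract").text = abstract      # Abstract
    ET.SubElement(did, "langmaterial").text = language  # Language of Materials
    ET.SubElement(archdesc, "bioghist").text = bioghist      # Biographical/Historical Note
    ET.SubElement(archdesc, "scopecontent").text = scope     # Scope and Content
    ET.SubElement(archdesc, "accessrestrict").text = access  # Conditions Governing Access
    ET.SubElement(archdesc, "userestrict").text = use        # Conditions Governing Use
    return ET.tostring(ead, encoding="unicode")

print(minimal_ead("Jane Doe Papers", "1920-1965",
                  "Correspondence and diaries of a New York editor.",
                  "Jane Doe (1900-1970) was an editor and author.",
                  "Correspondence, diaries, and clippings.",
                  "Open for research.",
                  "Copyright retained by the donor.",
                  "English"))
```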

Schwimmer’s project exemplifies what a single results-minded archivist can do with a well-chosen tool and a little student and clerical help. Before Schwimmer’s project began, approximately a third of Yeshiva’s 2500 linear feet of manuscript holdings had been described, and when the project wrapped up roughly 18 months later, every collection had at least a basic finding aid. I think we’re going to see lots of similar AT success stories during the next few years, and, needless to say, I think that this is a very good thing.

Marisa Hudspeth of the Rockefeller Archive Center (RAC) then discussed how her repository is building a new AT reference module that will both meet its needs and enable it to, via release of the module’s source code and documentation, give back to the archival community. The RAC had been using a proprietary tool that supported patron registration and tracking of duplication services, but moved to the AT because of its robust collections management and descriptive modules. When it became apparent that the AT development team's energies were focused elsewhere, the RAC decided to hire several former team members and build a reference module itself.

When it’s completed, the reference module will perform the following functions:
  • Patron registration: will track research visits, publications, completion of necessary research paperwork, and research awards; and facilitate generation of statistics and reports.
  • Duplication services: will manage all types of requests; create standardized invoices in PDF; store fee schedules and shipping rates and automatically calculate totals; track service requests; generate statistics and reports; and securely manage payment information.
  • Retrievals, barcoding, and use tracking: will track use of materials by patrons; generate statistics and reports; automate the charge-out procedure using barcoding; add barcoding functionality to the AT’s Accession module; support printing of barcodes and box labels; and enable both archivists and researchers to submit pull requests electronically by clicking on boxes in the RAC’s EAD finding aids.
  • Reference requests and reading room scheduling: will electronically distribute reference requests to staff; allow staff to respond to requests within the AT; store request histories, staff assignments, and responses; generate statistics and reports; and enable archives that have limited research facilities to manage scheduling of research appointments and factor in holiday closings, weather incidents, and other events.
  • Personalized user accounts: will enable patrons to update their contact information, submit reference requests, schedule and cancel research appointments and sign up for waiting lists; receive notifications of closings and research room vacancies; sign up for newsletters and the like; view an orientation video and agree to the RAC’s terms of use; track the status of their duplication requests; review their own request histories; bookmark and comment on finding aids; submit funding paperwork; electronically sign forms, and, if they wish to do so, connect with other researchers.
At present, the RAC doesn’t know how this reference module will work with ArchivesSpace, which will, when completed, merge the AT and Archon, another open source archival data management system. However, the RAC will release the code and continue using it, even if the module can’t be incorporated into ArchivesSpace.
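
The RAC didn’t present any code, and the module outlined above is obviously far richer, but the sort of patron registration and duplication-request tracking at its heart might look, in miniature, something like the toy sketch below; all of the class names, fields, and fees are my inventions, not the RAC’s design.

```python
# Toy sketch of patron and duplication-request tracking; every class, field,
# and fee here is my invention, not the RAC's actual design.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RequestStatus(Enum):
    RECEIVED = "received"
    IN_PROGRESS = "in progress"
    SHIPPED = "shipped"

@dataclass
class DuplicationRequest:
    description: str
    pages: int
    per_page_fee: float
    shipping: float
    status: RequestStatus = RequestStatus.RECEIVED

    def invoice_total(self) -> float:
        # Stored fee schedule plus shipping, totaled automatically
        return self.pages * self.per_page_fee + self.shipping

@dataclass
class Patron:
    name: str
    registered: date
    visits: list[date] = field(default_factory=list)
    requests: list[DuplicationRequest] = field(default_factory=list)

p = Patron("A. Researcher", registered=date(2010, 6, 1))
p.visits.append(date(2010, 6, 4))  # one research visit, for the statistics
p.requests.append(DuplicationRequest("Box 12, folders 3-5", pages=40,
                                     per_page_fee=0.25, shipping=5.00))
print(p.requests[0].invoice_total())  # 15.0
```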

After this session ended, I was talking to a colleague about the RAC’s work, and we were both struck by the degree to which reference-related information systems remain paper-driven -- not only at our repository but also at many, many others. Our own repository is currently developing some of the functionality that will be included in the reference module (e.g., barcoding and use tracking), but we’re still terribly paper-centric. The RAC’s work ought to help propel the move away from paper, and it’s going to be really interesting to see how this exciting project pans out.

If you are an AT user and want to track reference requests, duplication services, etc., electronically, the RAC is looking for reference module beta testers. The module’s first component -- patron registration -- should be finished within a few weeks, and the entire module has a scheduled completion date of 31 December 2011, so things are moving right along. If you're interested in serving as a beta tester, contact Marisa Hudspeth at mhudspeth-at-rockarch.org.

Wednesday, November 4, 2009

MARAC Fall 2009: S6, EAD Perspectives at the Institutional, Research, and National Level

Moon over Manhattan, as seen from the Newport, Jersey City esplanade, 4:50 PM, 29 October 2009.

Post corrected 7 November 2009. I was sitting in the very back of the room in which S6 was held, and sometimes had trouble hearing the presenters. I completely misheard a couple of things that Michael Rush said during the start of his presentation, and this post contained some inaccurate information as a result. Thanks to Mike for setting me straight, and apologies all round.

I’m not doing a ton of description these days, but I cut my professional teeth on Machine Readable Cataloging (MARC) records and have lots of colleagues who are still doing a lot of MARC and Encoded Archival Description (EAD) work, so I always make it a point to attend conference sessions relating to description whenever possible. I’m glad I caught this one.

Michele Combs (Syracuse University Special Collections Research Center) opened the session by outlining the internal and external benefits of EAD, technical options for creating and providing access to EAD finding aids, and how her repository has integrated EAD into its workflow. I particularly liked her discussion of SU’s More Product, Less Process (MPLP)-influenced approach to description: Combs and her colleagues create EAD finding aids for new collections during the accessioning process, and they’re tackling the backlog by converting paper finding aids to EAD and using existing MARC records to generate basic EAD finding aids. As a result, every collection gets at least a basic EAD finding aid.
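
Combs didn’t say which tools Syracuse uses, but conceptually, deriving a basic EAD finding aid from an existing MARC record is a field-by-field crosswalk. Here’s a hedged sketch that assumes the pymarc library and a deliberately tiny mapping; the tag-to-element crosswalk and file name are illustrative only.

```python
# Hypothetical MARC-to-skeletal-EAD crosswalk in the spirit of what Combs
# described; this is not Syracuse's actual tooling. Assumes the pymarc library.
import xml.etree.ElementTree as ET
from pymarc import MARCReader

# Toy crosswalk: MARC tag -> (EAD parent, EAD element). Real mappings are richer.
CROSSWALK = {"245": ("did", "unittitle"),
             "520": ("did", "abstract"),
             "545": ("archdesc", "bioghist")}

def marc_to_ead_stub(record):
    ead = ET.Element("ead")
    archdesc = ET.SubElement(ead, "archdesc", level="collection")
    did = ET.SubElement(archdesc, "did")
    parents = {"did": did, "archdesc": archdesc}
    for tag, (parent, element) in CROSSWALK.items():
        for marc_field in record.get_fields(tag):
            text = " ".join(marc_field.get_subfields("a"))
            ET.SubElement(parents[parent], element).text = text
    return ET.tostring(ead, encoding="unicode")

with open("collections.mrc", "rb") as fh:  # hypothetical file of MARC records
    for record in MARCReader(fh):
        print(marc_to_ead_stub(record))
```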

Jeanne Kramer-Smyth (Discovery Communications and, BTW, the force behind Spellbound Blog) discussed ArchivesZ, an information visualization project that uses EAD finding aids from a variety of institutions as a source of structured data. Focusing on subjects, time periods, and linear footage, Kramer-Smyth and her associates normalized the data and decomposed compound subjects into tags, a step that dramatically increases the chances of finding overlapping collections. They also cross-tabulated subjects and time periods to identify the volume of records covering a given subject at a given time.

This is very cool stuff that promises to open up all kinds of new avenues of access, but Kramer-Smyth and her colleagues have run into a few problems, almost all of which stem from the flexibility inherent in the EAD specification. Each repository that provided finding aids to the ArchivesZ project had its own encoding quirks and particularities, and standardization across certain tags was lacking; for example, some repositories measure quantities of records in linear feet, while others use cubic feet. Some of the finding aids also had incomplete subject assignments (e.g., subjects reflected in the collection title weren’t listed as subjects).

Kramer-Smyth emphasized that these problems are fixable: she and others who use EAD as a data source can figure out how to write better code and ask repositories to submit “configuration files” that resolve data inconsistencies (e.g., by explaining local practices regarding quantity/extent information). However, it’s pretty plain that EAD still has a long way to go before it truly transcends institutional boundaries.
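
Kramer-Smyth didn’t show ArchivesZ internals, but the two normalization steps described above are easy to sketch; in this hypothetical fragment, the regular expression, conversion factor, and subject delimiter are my guesses, not ArchivesZ code.

```python
# Hypothetical sketch of the two normalization steps described above. The
# regular expression, conversion factor, and delimiter are my guesses, not
# actual ArchivesZ code.
import re

CUBIC_TO_LINEAR = 1.0  # placeholder factor; real conversions vary by container

def normalize_extent(extent):
    """Extract a linear-feet figure from strings like '12 linear ft.' or '3 cubic feet'."""
    m = re.search(r"([\d.]+)\s*(linear|cubic)\s*(?:feet|ft\.?)", extent, re.IGNORECASE)
    if m is None:
        return None  # unparseable local practice -- flag for a "configuration file"
    value, unit = float(m.group(1)), m.group(2).lower()
    return value if unit == "linear" else value * CUBIC_TO_LINEAR

def decompose_subject(subject):
    """Split an LCSH-style compound heading into individual tags."""
    return [part.strip() for part in subject.split("--") if part.strip()]

print(normalize_extent("12 linear ft."))                   # 12.0
print(normalize_extent("3 cubic feet"))                    # 3.0 (after conversion)
print(decompose_subject("Agriculture--Vermont--History"))  # ['Agriculture', 'Vermont', 'History']
```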

Michael Rush (Beinecke Rare Book & Manuscript Library, Yale University), who is drafting the charge for the soon-to-be reconstituted EAD Working Group, which will be charged with revising EAD, provided a useful overview of some of the changes that may be incorporated into EAD 3.0:
  • Reduction of mixed content, i.e., the mixing of text and tags (see the short example below).
  • Allowing namespace interoperability, i.e., giving implementers the ability to embed MODS, PREMIS, and other XML schemas directly into an EAD finding aid.
  • Improvement of data handling, e.g., getting rid of forward slashes, which are ignored by many programs.
  • Eliminating anything that doesn’t describe the records, e.g., the head elements and label attributes used to mark scope and content notes; formatting information belongs in stylesheets, not in the EAD schema!
  • Possibly removing table and list coding and recursive tags.
  • Reining in the diversity of practice, which is a political challenge: people do things a certain way because a given way meets a given need, but this diversity makes it harder to exchange data across institutions or pull EAD data into a database. In an effort to accommodate everyone, the Working Group might come up with a strict EAD and a loose EAD that allows greater diversity of practice.
The Working Group will need volunteers to steer the revision process; if you’re interested, contact Rush at michael.rush-at-yale.edu.
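
If you haven’t wrestled with mixed content, a tiny invented example may help show why it complicates data reuse: a program that naively grabs an element’s text gets only the fragment that precedes the first embedded tag.

```python
# Invented example of why mixed content complicates data reuse: naively
# grabbing an element's text returns only the fragment before the first child.
import xml.etree.ElementTree as ET

snippet = "<unittitle>Papers of <persname>Jane Doe</persname>, editor</unittitle>"
el = ET.fromstring(snippet)

print(el.text)                 # 'Papers of ' -- stops at the embedded <persname>
print("".join(el.itertext()))  # 'Papers of Jane Doe, editor' -- must walk the tree
```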

Session chair Mark Matienzo (New York Public Library) then asked the panelists a really provocative question: should archivists think of finding aids as documents or as data sources? All three panelists concurred that we need to start seeing finding aids as data sources from which documents, which still have many uses, can be produced as needed; conceptualizing finding aids as documents has led to many of the quirks and inconsistencies that become apparent anytime one looks at multiple institutions’ finding aids. As Michael Rush pointed out, we’ve moved beyond the point at which documents meet our needs. With MPLP and other developments, description is never done, and although we need the capacity to take a snapshot of a given description as it exists at a given point in time, we need to focus more on standardized creation of data over time.

All in all, a phenomenal session that brought to mind my own long-ago (and subsequently back-burnered) realization that the MARC format could be thought of as a highly flexible and repurposable information source, not just a cluster of templates organizing the presentation of various chunks of information. It also called to mind various past efforts to increase the consistency of MARC cataloging across institutions, most of which didn’t pan out. Here’s hoping that past experience, the profession’s increasing comfort and familiarity with databases, etc., and the emergence of new tools that make use of structured descriptive data make it possible to standardize descriptive practice in the EAD era.

Monday, September 1, 2008

SAA: Day three of sessions

Immediately after the conference ended on Saturday, a friend from North Carolina and I spent some time exploring Pier 39 and Golden Gate Park. I had to start packing as soon as I got back to my hotel, so I didn't get the chance to do any blogging. Most of this entry was written during a long layover in Chicago yesterday afternoon, but I wasn’t willing to pay the $7.00/hour fee for WiFi access at ORD and was simply too tired to post it last night.

Old Movies, New Audiences: Archival Films as Public Outreach Tools
I went to this session because a colleague of mine who isn’t here in San Francisco is overseeing the digitization of many of our audio, video, and motion picture holdings. Now that we’re starting to receive digital files from our vendor, we need to figure out not only how to manage them properly but also how to make them widely accessible, so I’ll give her my notes when I get back.

I came in a bit late, so I didn’t get to hear all of Bill Moore’s presentation. However, I did get to learn a little bit about how the Oklahoma History Center, which has worked with the National Film Preservation Foundation (NFPF) to preserve some of its holdings, highlights its audiovisual holdings through community screenings, production of DVDs, and provision of footage to television and film producers; his repository, which has commissioned the creation of a new score for a silent film and can supply footage in formats required by professionals, has apparently managed to develop a substantial technical infrastructure.

Christine Paschild, formerly of the Japanese American National Museum (JANM), discussed how the JANM sought to make footage documenting life in prewar Japanese American communities and in World War II concentration camps accessible to the K-12 educators who attend its summer curriculum institutes. JANM conducted focus groups with teachers and learned that they wanted the ability to view short snippets of footage, access footage without doing a lot of technological prep work, search by keyword, and use subject headings that aligned with their lesson plans (i.e., geographic location, names of specific camps, and topics such as family life and sports).

JANM, which got NFPF funding, then worked with a vendor to digitize the footage, break it into short snippets, and create detailed descriptions of each snippet. The snippets are available online via the Nikkei Album, which also allows people who visit the site to comment upon the films and to add their own films, photos, lesson plans, etc. The Nikkei Album allows people to browse JANM’s motion picture holdings, and by highlighting the value of home movies to viewers it may lead to increased preservation of such footage. JANM makes quite a bit of money through licensing, and the Nikkei Album may enable producers to do more of their own searching.

JANM really lucked out in that it worked with a vendor willing to do all of the descriptive and editing work that it needed; even so, the editing and tagging took 4-5 times longer than JANM initially anticipated.

Paschild concluded by noting that the cataloging of materials on the Nikkei Album site doesn’t correspond to the cataloging of JANM’s other holdings and that this poses problems. However, the project also made JANM realize that access involves more than simply placing stuff on the Web: archives need to understand what kinds of description people need in order to make use of the material. Of course, Paschild isn’t alone in coming to this realization, and those of us specializing in archival description will likely spend the next decade or two coming to grips with its implications.

Snowden Becker of the Center for Home Movies focused on Home Movie Day, an annual event that began in 2003 and is now held in more than 60 cities on four continents; the next Home Movie Day will take place on October 18. Organizers of each Home Movie Day event invite local people with amateur film in their possession to have their films inspected and screened in a community setting. Owners are also encouraged to narrate their films, and audience members can often identify places, people, etc., depicted on the screen. Organizers get local businesses involved as sponsors and contributors and work with volunteers to secure equipment that enables film to be shown safely.

Becker argued that sponsoring a Home Movie Day event has a number of benefits for archivists and audiences alike. It’s an easy way for archivists to raise their repository’s profile (even though the focus isn’t on existing holdings), and it allows staff to hone their identification, evaluation, and interviewing skills and to start identifying materials that they might wish to add to their collections. Moreover, Home Movie Day encourages audiences to recognize that home movies can be historically significant even if they don’t depict famous people or momentous events and to become actively interested in preserving their own footage. Home Movie Day can also lead to the discovery of previously unrecognized personal or historical connections and bring together people with related interests.

Given my other commitments, there is no way I could organize a Home Movie Day. However, I really hope that someone else in my corner of upstate New York does so.

Game On: Leading Your Championship Team
The always amazing Rosemary Pleva Flynn was the solo presenter at this session, and she succinctly distilled a whole lot of business literature on team characteristics and dynamics and on leadership styles and attributes. Pleva Flynn, who is keenly attuned to the managerial dimensions of archival practice, is absolutely right that archivists a) need to pay more attention to these issues and b) generally don’t have the time or, more importantly, the inclination to do so; even those of us who spend the bulk of our days supervising people and directing projects tend to see ourselves chiefly as archival practitioners.

Pleva Flynn’s presentation was really detailed, so instead of recapping it, I’m simply going to point to the resources she identified as particularly valuable:
Pleva Flynn gave us a lot to think about, and I plan to spend a lot of time mulling over the notes I took when I get home (and get some sleep) and make use of her guidance whenever I can.