
Category: SAA2006

Reflections on Blogging at SAA 2006

Mark A. Matienzo’s recent post (and its related comments), On what “archives blogs” are and what ArchivesBlogs is not, over on thesecretmirror.com got me thinking again about my experience of blogging SAA2006 (as well as making me want to send out a special thank you to everyone for their kind words – as much as I am writing for myself, I will admit to being encouraged that there are others who find my posts worth reading).

Since there was no internet available in the rooms where the panels were held, I found myself taking notes on my laptop. Thirty-seven pages of notes later, sitting at home alone trying to convert those notes into coherent posts, I sometimes found it hard not to be overwhelmed. It was interesting to try to strike a balance between sharing the ideas the panelists had presented and including my own insights. I think what I ended up with was a decent mix – with the opportunity to include ideas about the connections among many of the panel topics, as well as other ideas and websites from outside the conference. On the downside, I never did finish writing up all the talks I took notes on. The scale of the task got to me – and I realized that I had started to wish I could write about something else. So I did!

I do wonder how different my posts would have been if I could have posted them live. I think that I would have covered a greater breadth of speakers – but with a loss of depth. I would have had less opportunity to reflect on how the speakers’ talks connected with the rest of the archival world – especially those examples and other ideas I was able to link to as a result of my extra time.

I hope that we (ie, anyone who wants to try their hand at it) can coordinate a broader group of bloggers at SAA 2007 in Chicago, both to share the ideas presented with those who could not attend and to permit further reflection on connections among all the new ideas that might otherwise be hard to share. The library community is ahead of us on this front. Take a look at the page for the Public Library Association’s recent conference in Boston. This page gives people an easy link to view the posts from the PLA 2006 conference – while spreading the work among many keyboards. Perhaps there is a place for something like this in the future of archives conferences.

Question from the Archives of American Art and EAD talk (session 305)

At the end of the Encoded Archival Description panel, someone in the audience asked if ColdFusion and ASP were used for the Archives of American Art project. The response was interesting. The answer was yes to ColdFusion and no to ASP. That wasn’t the interesting part. The part I was intrigued by was the reasons WHY they had used ColdFusion.

The developer on the project was there and stood to add his 2 cents. He said these were the reasons for the choice of ColdFusion:

  • The Smithsonian is not enthusiastic about open source software
  • The Smithsonian is not unfriendly towards ColdFusion
  • He knew ColdFusion very well

This immediately made me think of a recent post at Creating Passionate Users: When the “best tool for the job”… isn’t. In her post, Kathy Sierra talks about factors to weigh when choosing a software tool to solve a problem OTHER than which is the best tool for the job based on the features of all the options. She proposes (in what she admits is a sweeping generalization) that enthusiasm for a tool be weighed more heavily than its pure appropriateness for the task when selecting which tool to use.

I am not saying that ColdFusion was necessarily the AAA developer’s first choice – but it is interesting to remember that there are LOTS of different elements that go into choosing software to address the challenges at the intersection of archives and the internet. One of those elements is simply the skills of the people you have available to work on a project – and their enthusiasm for the tools at hand.

Session 305: Encoded Archival Description Part I – Archives of American Art

Session 305 included perspectives from three digital collection projects that are trying to use EAD and metadata to solve real-world problems of navigation and access. This post addresses the presentation by the first speaker, Barbara Aikens from the Archives of American Art at the Smithsonian.

The Archives of American Art (AAA) has over 4,500 collections focusing on the history of American art. They received a 3.6 million dollar grant from the Terra Foundation to fund their 5 year project. They had already been using EAD as their standard for online finding aids since 2004. They had also already looked into digitizing their microfilmed holdings, and they believe that the history of microfilming at AAA made the transition to scanning entire collections at the item level easier than it might otherwise have been. So far they have digitized 11 full collections (45 linear feet).

Their organization of the digitized files was based on collection code, box, and folder. Basing their template on the EAD Cookbook, AAA used NoteTab Pro to create their EAD XML finding aids. I wonder how they might be able to take advantage of the open source software tools being developed, such as Archon and the Archivists’ Toolkit (if you are interested in these packages, keep an eye out for my future post looking at each of them in detail). There was some mention of re-purposing DCDs, but I was not clear about what they were describing.
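To make the collection code / box / folder organization concrete, here is a toy sketch (NOT AAA’s actual workflow – their template, data, and tools are their own) of how a simple folder inventory could be turned into the container-list portion of an EAD 2002 finding aid with nothing but the Python standard library. The series names, box and folder numbers, and titles are all invented for illustration.

```python
# Toy sketch: generate the <dsc> container list of an EAD 2002 finding aid
# from a flat inventory of (series, box, folder, title) rows.
# All data below is hypothetical.
import xml.etree.ElementTree as ET

inventory = [
    ("Correspondence", "1", "1", "Letters, 1930-1935"),
    ("Correspondence", "1", "2", "Letters, 1936-1940"),
    ("Gallery Artist Files", "2", "1", "Exhibition announcements"),
]

dsc = ET.Element("dsc", type="combined")
series_elems = {}  # one <c01 level="series"> per distinct series title
for series, box, folder, title in inventory:
    if series not in series_elems:
        c01 = ET.SubElement(dsc, "c01", level="series")
        s_did = ET.SubElement(c01, "did")
        ET.SubElement(s_did, "unittitle").text = series
        series_elems[series] = c01
    # each folder becomes a <c02 level="file"> with box/folder containers
    c02 = ET.SubElement(series_elems[series], "c02", level="file")
    f_did = ET.SubElement(c02, "did")
    ET.SubElement(f_did, "container", type="box").text = box
    ET.SubElement(f_did, "container", type="folder").text = folder
    ET.SubElement(f_did, "unittitle").text = title

ead_xml = ET.tostring(dsc, encoding="unicode")
print(ead_xml)
```

The point is less the specific elements than the idea that a finding aid with a regular collection/box/folder structure is exactly the kind of thing tools like Archon or the Archivists’ Toolkit can generate, rather than hand-editing XML in a text editor.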

The resulting online finding aid lets you read all the information you would expect to find in a finding aid (see an example), as well as permitting you to drill down into each series or container to view a list of folders. Finally, the folder view provides thumbnails on the left and a large image on the right. Note that this item-level folder view includes very basic folder metadata and a link back to that folder’s corresponding series page. There is no metadata for any of the images of individual items. This approach to organizing and viewing digitized collections is workable for large collections. The context is well communicated, and the user’s experience is much like going through a collection while physically visiting an archive. First you use the finding aid to locate collections of interest. Next you examine the series and/or container descriptions to locate the types of information you are looking for. Finally, you can drill down into folders with enticing names to see if you can find what you need.

As an experiment, I tested the ‘Search within Collections/Finding Aids’ option by searching for “Downtown Gallery” and for gallery artist files to see if I was given a link to the new Downtown Gallery Records finding aid. My search for “Downtown Gallery” instead directed me to what appears to be a MARC record in the Smithsonian Archives, Manuscripts and Photographs catalog. Two versions of the finding aid are linked from this record – with no indication of how they differ (it turned out one was an old version; the other was the new one, which includes links to the digitized content). A bit more experimentation showed me that the new online collection finding aids are not integrated into the search. I will have to remember to try this sort of searching in a few months to see what the search experience is like.

What I was hoping for (in a perfect world) would be highlighting of the search terms and deep linking from the search results directly to the series and folder description pages. I wonder what side effects there will be for the accuracy of search results, given that the series/folder detail description page does not include all the other text from the main finding aid (ie, New Finding Aid vs New Finding Aid Series Level Page). Oddly enough, the old version of the finding aid for this same collection includes the folder level descriptions on the SAME page (with HTML anchors permitting linking from the sidebar Table of Contents to the correct location on the page). So a search for terms that appear in the historical background along with the name of an artist listed only at the folder level WOULD return results (in standard text searching) for the old finding aid but not for the new one. Once the new finding aids are integrated into the search results, it would be very helpful to have an option to return only finding aids that include digitized collections.
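The search-term highlighting I have in mind is a well-understood technique; here is a minimal sketch of it (the finding-aid snippet and query terms are hypothetical, and a real system would highlight at index time rather than with regexes over raw text):

```python
# Minimal sketch of search-term highlighting: wrap case-insensitive,
# whole-word matches of each query term in an HTML <mark> tag.
import re

def highlight(text, terms):
    """Return text with every whole-word match of each term wrapped in <mark>."""
    for term in terms:
        pattern = re.compile(r"\b(%s)\b" % re.escape(term), re.IGNORECASE)
        text = pattern.sub(r"<mark>\1</mark>", text)
    return text

snippet = "The Downtown Gallery records include gallery artist files."
print(highlight(snippet, ["Downtown Gallery", "artist"]))
# → The <mark>Downtown Gallery</mark> records include gallery <mark>artist</mark> files.
```

Deep linking would then be a matter of pointing each result at the HTML anchor for the matching series or folder – which is exactly what the old single-page finding aids already make possible.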

While exploring the folder level view, I assumed that the order of the images in the folders is the original order in the analog folder. If so, then that is a fabulous and elegant way of communicating the original order of the records to the user of the digital interface. If NOT – then it is quite misleading because a user could easily assume, as I did, that the order in which they are displayed in the folder view is the original order.

Overall, this is exciting work – and it shows how well EAD can function as a framework for the item level digitization of documents. It also points to some interesting questions about how to handle search within this type of framework.

UPDATE: See the comment below for the clarification that the new finding aids based on the work described in this presentation are NOT online yet – but should be at the end of the month (posted: 08/09/2006). 

Session 510: Digital History and Digital Collections (aka, a fan letter for Roy and Dan)

There were lots of interesting ideas in the talks given by Dan Cohen and Roy Rosenzweig during their SAA session Archives Seminar: Possibilities and Problems of Digital History and Digital Collections (session 510).

Two big ideas were discussed: the first about historians and their relationship to internet archiving and the second about using the internet to create collections around significant events. These are not the same thing.

In his article Scarcity or Abundance? Preserving the Past in a Digital Era, Roy talks extensively about the dual challenges of losing information as it disappears from the net before being archived and the future challenge to historians faced with a nearly complete historical record. This assumes we get the internet archiving thing right in the first place. It assumes those in power let the multitude of voices be heard. It assumes corporately sponsored sites providing free services for posting content survive, are archived, and do the right thing when it comes to preventing censorship.

The Who Built America CD-ROM, released in 1993 and bundled with Apple computers for K-12 educational use, covered the history of America from 1876 to 1914. It came under fire in the Wall Street Journal for including discussions of homosexuality, birth control and abortion. Fast forward to now, when schools use filtering software to prevent ‘inappropriate’ material from being viewed by students – in much the same way that Google China filters search results. He shared with us the contrast between the Google Images search results for ‘Tiananmen square’ and the Google Images China search results for ‘Tiananmen square’. Something so simple makes you appreciate the freedoms we often forget here in the US.

It makes me look again at the DOPA (Deleting Online Predators Act) legislation recently passed by the House of Representatives. In the ALA’s analysis of DOPA, they point out all the basics as to why DOPA is a rotten idea. Cool Cat Teacher Blog has a great point by point analysis of What’s Wrong with DOPA. There are many more rants about this all over the net – and I don’t feel the need to add my voice to that throng – but I can’t get it out of my head that DOPA’s being signed into law would be a huge step BACK for freedom of speech and learning and internet innovation in the USA. How crazy is it that at the same time that we are fighting to get enough funding for our archivists, librarians and teachers – we should also have to fight initiatives such as this that would not only make their jobs harder but also siphon away some of those precious resources in order to enforce DOPA?

In the category of good things for historians and educators is the great progress of open source projects of all sorts. When I say Open Source I don’t just mean software – but also the collection and communication of knowledge and experience in many forms. Wikipedia and YouTube are not just fun experiments – but sources of real information. I can only imagine the sorts of insights a researcher might glean from the specific clips of TV shows selected and arranged as music videos by TV show fans (to see what I am talking about, take a look at some of the videos returned from a search on gilmore girls music video – or the name of your favorite pop TV characters). I would even venture to say that YouTube has found a way to provide a method of responding to TV, perhaps starting down a path away from TV as the ultimate passive one way experience.

Roy talked about ‘Open Sources’ being the ultimate goal – and gave a final plug to fight to increase budgets of institutions that are funding important projects.

Dan’s part of the session addressed that second big idea I listed – using the internet to document major events. He presented an overview of the work of ECHO: Exploring and Collecting History Online. ECHO had been in existence for a year at the time of 9/11 and used 9/11 as a test case for their research to that point. The Hurricane Digital Memory Bank is another project launched by ECHO to document stories of Katrina, Rita and Wilma.

He told us the story behind the creation of the 9/11 digital archive – how they decided they had to do something quickly to collect the experiences of people surrounding the events of September 11th, 2001. They weren’t quite sure what they were doing – if they were making the best choices – but they just went for it. They keep everything. There was no ‘appraisal’ phase to creating this ‘digital archive’. He actually made a point a few minutes into his talk to say he would stop using the word archive, and use the term collection instead, in the interest of not having tomatoes thrown at him by his archivist audience.

The lack of an appraisal phase prompted a question at the end of the session about where that leaves archivists who believe that appraisal is part of the foundation of archival practice. The answer was that we have the space – so why not keep it all? Dan gave an example of a colleague who had written extensively based on research done using World War II rumors found in the Library of Congress. These easily could have been discarded as unimportant – but you never know how information you keep can be used later. He told a story about how they noticed that some people are using the 9/11 digital archive as a place to research teen slang, because it has such a deep collection of teen narratives submitted to be part of the archive.

This reminded me of a story that Prof. Bruce Ambacher told us during his Archival Principles, Practices and Programs course at UMD. During the design phase for the new National Archives building in College Park, MD, the Electronic Records division was approached to find out how much room they needed for future records. Their answer was none: they believed that the space required to store digital data was shrinking faster than the rate at which new records were coming into the archive. One of the driving forces behind the strong arguments for the need for appraisal in US archives was the sheer bulk of records that could not possibly be kept. While I know that I am oversimplifying the arguments for and against appraisal (Jenkinson vs Schellenberg, etc), it is interesting to take a fresh look at this in light of removing the challenges of storage.

Dan also addressed some interesting questions about the needs of ‘digital scholarship’. They got zip codes from 60% of the submissions for the 9/11 archive – they hope to increase the accuracy and completeness of GIS information in the hurricane archive by using Google Maps’ new feature permitting pinpointing of latitude and longitude based on an address or intersection. He showed us some interesting analysis made possible by pulling slices of data out of the 9/11 archive and placing them as layers on a Google Map. In the world of mashups, one can see this as an interesting and exciting new avenue for research. I will update this post with links to his promised details to come on his website about how to do this sort of analysis with Google Maps. There will soon be a researcher’s interface of some kind available at the 9/11 archive (I believe in sync with the 5 year anniversary of September 11).
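The kind of slicing Dan described starts with something very simple: grouping submissions by their geographic field before handing the counts to a map. Here is a hedged sketch with invented records and field names (nothing here reflects the 9/11 archive’s actual data model):

```python
# Sketch: count submissions per zip code so each count could become a
# point or layer on a map. Records and field names are invented.
from collections import Counter

submissions = [
    {"zip": "10013", "kind": "story"},
    {"zip": "10013", "kind": "image"},
    {"zip": "11201", "kind": "story"},
    {"zip": None, "kind": "story"},  # submissions with no zip (about 40%)
]

# skip records with no geographic information
by_zip = Counter(s["zip"] for s in submissions if s["zip"])

for zip_code, count in by_zip.most_common():
    print(zip_code, count)
```

Each (zip code, count) pair is then ready to be geocoded and dropped onto a map as a marker or density layer – which is essentially what the Google Maps overlays he showed were doing.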
Near the end of the session a woman took a moment to thank them for taking the initiative to create the 9/11 archive. She pointed out that much of what is in archives across the US today is the result of individuals choosing to save and collect things they believed to be important. The woman who had originally asked about the place of appraisal in a ‘keep everything digital world’ was clapping and nodding and saying ‘she’s right!’ as the full room applauded.

So – keep it all. Snatch it up before it disappears (there were fun stats: most blogs remain active for only 3 months, most email addresses last about 2 years, and inactive Yahoo Groups are deleted after 6 months). There is likely a place for ‘curatorial views’ of the information, created by those who evaluate the contents of the archive – but why assume that something isn’t important? I would imagine that as computers become faster and programming becomes smarter – if we keep as much as we can now, we can perhaps automate the sorting out later with expert systems that follow very detailed rules for creating more organized views of the information for researchers.
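A ‘curatorial view’ over a keep-everything collection could be as simple as an explicit, inspectable rule applied after the fact. This is a toy sketch of that idea (the rule, records, and field names are all invented – think of the teen-slang researchers carving their own view out of the 9/11 collection):

```python
# Sketch: a rule-based "curatorial view" over a keep-it-all collection.
# Records and the rule itself are hypothetical.

def teen_narrative_rule(record):
    """Rule: keep narratives from self-identified contributors under 20."""
    return record.get("kind") == "narrative" and record.get("age", 999) < 20

collection = [
    {"id": 1, "kind": "narrative", "age": 16},
    {"id": 2, "kind": "photo"},                # no age given
    {"id": 3, "kind": "narrative", "age": 45},
]

# the full collection is untouched; the view is just a filtered slice
view = [r for r in collection if teen_narrative_rule(r)]
print([r["id"] for r in view])  # → [1]
```

The appealing property is that nothing is discarded: a different researcher can apply a completely different rule to the same untouched collection tomorrow.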

This panel had so many interesting themes that crossed over into other panels throughout the conference. The Maine archivist talking about ‘stopping the bleeding’ of digital data loss in his talk about the Maine GeoArchives. The panel on blogging (which I will write more about in a future post). The RLG Roundtable, with presentations from people at the Internet Archive about archiving everything (which ALSO deserves its own future post).

I feel guilty for not managing to touch on everything they spoke about – it really was one of the best sessions I attended at the conference. I think that having voices from outside the archival profession represented is both a good reality check and great for the cross-pollination of ideas. Roy and Dan have recently published a book titled Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web – definitely on my ‘to be read’ list.

Overall Conference Impressions

I went to many sessions at the 2006 Joint Annual Meeting of NAGARA, COSA, and SAA and will add more presentation posts over the course of the next two weeks. I have 37 pages of notes in MS Word – though there is lots of white space throughout as I made bullet lists and started new pages for new presentations as I went. And some of my notes are on paper (darn that laptop battery). My first three pages of notes translated into the 3 posts I have put up so far summarizing and commenting on sessions – so I suspect it will take me a while to work my way through them. Combine that with all the ideas generated in conversations with fabulous people or that occurred to me during presentations and I have no fear about running out of ideas for posts here anytime soon.

I presented my poster “Communicating Context in Online Collections” throughout the morning on Friday. I enjoyed speaking with everyone who stopped by to get the long version of the ideas behind my poster. Another plan I have is to post a version of my poster along with a full list of links to the websites I used as examples – look for it before the end of August.

My past experiences with conferences are from the technical world – I have been to and presented at more than one Oracle Open World conference. These are huge, monstrous affairs which take over large city convention centers. While my first few minutes at this conference were a slightly overwhelming throng of people I didn’t know, I rapidly found people I knew and met many new people.

Being used to high tech conferences, I was surprised by the lack of internet access, which, while slightly frustrating for attendees, was quite puzzling in the context of presenters: no live demos of project websites or of the software many were discussing. Everyone worked around it (most had come prepared with screenshots of what they wanted to show) – it just seemed very strange.

There are some poster related things I would put on my wishlist to change for next year (speaking as a student who has never attended an SAA conference before):

  • opportunity to assemble my poster during non-session time
  • please take into account that most posters seem to be arranged in ‘landscape’ layout rather than ‘portrait’ and provide enough space for them all
  • more room for presenters to stand in front of their posters (there were great challenges this year with the placement of a buffet brunch table 2 feet in front of a long row of posters precisely during one of the main assigned poster presentation times)
  • either clear indication of when to pick up posters (again, not during session time) – or someone to take the posters to safety so they don’t end up in a pile at the back of the exhibit hall as they did this year

A big thank you to everyone I met at the conference. You made my first experience in the ‘greater archival universe’ (aka, beyond the University of Maryland) a good one. More SAA2006 posts and supporting information related to my poster coming soon.