The last day at AHA 2016 included two solid digital history sessions. The first, “Digital History and Digital Preservation Projects,” was one of several sessions this week on digital histories of slavery. Last year’s Digital Histories of Slavery session was standing-room-only and overflowing with energy, so I understand why there was plenty of related programming this year. Of these, today’s session seemed most interesting to me since it focused less on analysis and more on how to grow and sustain these digital projects, something I’d be more likely to work on as a librarian.
David Eltis of the eminent slavevoyages.org project started with a detailed discussion of the site’s growth, including web analytics charts and statistics. According to Eltis, showing these statistics to donors and grant organizations is essential to securing continued funding. I was pleased to see someone sharing web analytics, too – I like hearing about how digital projects connect with wide audiences instead of launching and calling it a day. With this continued funding, Eltis says that slavevoyages.org will add new data from new sources. Eltis also discussed technical sustainability issues, like keeping dedicated servers and site code up to date and optimizing the site for access.
Following Eltis, Sean Kelley and Paul Lovejoy discussed the Studies in the History of the African Diaspora – Documents (SHADD) project, which gathers documents presenting first-hand testimonies and voices of enslaved people born in West Africa. Kristin Mann followed by discussing some of her own research that uses the types of documents featured in slavevoyages.org and SHADD, demonstrating their practical applications. Last up was Jane Landers of Vanderbilt University, who has spent decades on a fascinating documentary project using Catholic Church records from Cuba and other Latin American countries. The project began with microform copies of Cuban church records in the 1990s and has since expanded to CD-ROM and now online editions. Landers described rich, detailed local records and her team’s heroic efforts to both preserve them and engage local communities.
The panel did an admirable job of leaving enough time for a long question-and-answer session. The panel and audience discussed involving university archives, libraries, and IT staff in digital preservation. Vanderbilt is engaged in preserving Landers’ project, but Lovejoy lamented that York University does not provide such support. Eltis issued a brutal wake-up call, saying that “anything you put on the Internet is temporary.” He suggested not relying on a single university but getting multiple stakeholders invested, so that projects wouldn’t depend on just one source of support. Eltis spoke with a tone of realism throughout the morning, saying that he wouldn’t have gotten involved in this work had he known how much money it would require raising. Fortunately, he did get involved and has done some amazing work.
The second panel this morning was on the Text Encoding Initiative (TEI). Rather than discussing sustainability, this panel encouraged attendees to start new projects from scratch. Instead of an introduction to markup languages, the panel talked about ways to use this XML schema. (When asked, most attendees raised their hands to indicate existing knowledge of TEI or other XML.)
Stephanie Kingsley started the session by talking about her research into the publishing history of James Fenimore Cooper’s <i>Mercedes of Castile</i>. Using the open-source Juxta Commons application, Kingsley could compare different editions of the book and see where certain editions had excised controversial passages. Following some examples with lolcat metadata (definitely a highlight of the day), Susan Garfinkel encouraged scholars to use TEI alongside other XML schemas to explore the different uses of markup languages. A primary source such as a diary, Garfinkel argues, is a dataset, and analyzing it as such can give us insight into the human mind.
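The kind of edition collation Juxta Commons performs can be sketched in miniature with Python’s standard difflib module. To be clear, the passages below are invented for illustration, not actual text from Cooper’s editions, and a real collation tool does far more than a line diff:

```python
import difflib

# Hypothetical snippets from two editions of the same passage
# (invented text, purely for illustration).
edition_1840 = [
    "The Admiral spoke at length of the riches of the Indies,",
    "and of the souls that might be won to the Church.",
]
edition_1850 = [
    "The Admiral spoke at length of the riches of the Indies.",
]

# unified_diff flags lines present in one edition but missing from
# another -- a crude version of spotting an excised passage.
for line in difflib.unified_diff(edition_1840, edition_1850,
                                 fromfile="1840 edition",
                                 tofile="1850 edition",
                                 lineterm=""):
    print(line)
```

A dedicated tool like Juxta adds word-level alignment and a visual heat map across many witnesses at once, but the underlying idea is the same: align versions and surface the differences.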
Joseph Wicentowski of the U.S. State Department’s history office gave especially practical advice, suggesting that scholars check out free software such as eXist instead of learning XSLT or purchasing software like Oxygen. Kathryn Tomasek concluded the session by talking about her own work marking up financial record books, as well as the TEI community’s work and future opportunities to connect TEI files to the semantic web with the Resource Description Framework (RDF). Unfortunately, there was not much time for Q&A; I would have asked about possible ways to automate TEI markup, since the examples the panel presented seemed quite time-consuming.
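To give a sense of the automation I had in mind, here is a toy first pass in Python: wrapping four-digit years in TEI `<date>` elements. The `when` attribute is genuine TEI, but everything else here is a simplification, since real projects need context-sensitive handling that a regular expression can’t provide:

```python
import re

def tag_years(text):
    """Wrap four-digit years (1500-1999) in TEI <date> elements.

    A deliberately naive sketch: it will happily tag page numbers or
    quantities that merely look like years, which is exactly why
    hand-encoding is still the norm.
    """
    return re.sub(r"\b(1[5-9]\d{2})\b",
                  r'<date when="\1">\1</date>',
                  text)

source = "The ledger opens in 1843 and closes in 1851."
print(tag_years(source))
# -> The ledger opens in <date when="1843">1843</date> and
#    closes in <date when="1851">1851</date>.
```

Even a rough pass like this could be reviewed and corrected by a human encoder, which might cut down the hand-markup time the panelists described.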
Both sessions today gave me optimism for the future of digital history. Rather than talking vaguely about “possibilities” or pilot projects, both sessions gave concrete examples of successful projects and offered practical advice. This indicates that digital history is maturing well, and I’m looking forward to seeing more projects like the ones I heard about today.