This posting concludes the data visualization section. The three postings within this section covered Voyant, Kepler, and Palladio; all three use computer algorithms to present data visually to the researcher or user. This post has three parts: the first compares the tools and discusses the insights that can be gleaned from that comparison; the second focuses on how the tools complement each other in their effort to recreate the past; the last, the conclusion, briefly discusses the limits of these programs.
Part I: Comparison of Tools and Insight
Part 1a: Comparison of Tools
Simply put, all of these tools specialize in data visualization; they do not make arguments or draw conclusions. They merely present the data sets and metadata that are inputted. That being said, these programs do present data that would otherwise be hard to visualize on paper or in hard copy.
In presenting the data, three main visualization methods have been discussed over the past three blog postings: text analysis with Voyant, mapping with Kepler, and networking with the help of Palladio. All three programs used the WPA Federal Writers' Project slave narratives from the Library of Congress digital archive, found here. All three programs are only as good as the metadata they are given. Simply put, if the metadata includes geospatial data, then one is able to do a mapping project or even a network project. However, if the data includes only the texts of the interviews, then one would have to search the documents more closely to compile the metadata by hand – such as interview location and interview time.
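To make this concrete, the kind of metadata table such a project depends on can be sketched in a few lines of Python. The records below are hypothetical, not drawn from the WPA archive; the point is only that once location and date fields have been compiled, the corpus becomes a flat CSV file of the sort that tools like Kepler or Palladio can ingest.

```python
import csv
import io

# Hypothetical metadata records, of the kind a researcher would compile
# by hand from the interview texts themselves (values are illustrative).
rows = [
    {"interviewee": "Interview A", "location": "Little Rock, AR", "year": "1937"},
    {"interviewee": "Interview B", "location": "Mobile, AL", "year": "1938"},
]

# Write the records to CSV, the flat upload format these tools accept.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["interviewee", "location", "year"])
writer.writeheader()
writer.writerows(rows)
csv_text = buffer.getvalue()
```

The hard scholarly work is in filling `rows` accurately; the export itself is trivial.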
Regarding location, mapping and network programs share a singular problem: the inability to overlay historical maps onto modern geo-locational data points. Programs like Kepler and Palladio place the data points onto current, modern-day city locations; this becomes complicated if a city retains its name but has moved, or if the city no longer exists. Overlaying a historical map onto the current locational data points would allow for a more accurate reading of the location. An example of this difficulty is discussed in the project Mall History, which attempts to place older, historical geospatial data points onto a more modern map. Placing the data in the correct location is a common concern for mapping and networking projects.
Next, in comparing the programs, it is clear that all three are web-based. They offer a great introduction to the tools used in the field of digital humanities. These web-based programs are more than just user friendly; they offer an easy way to interact with the data. Take a mapping or networking program like Palladio, for example: its mapping projects use Leaflet.js to produce a lightweight version of the data for easy reading on a mobile device. This points to the presentation aspect of digital humanities projects, discussed more in part 1b.
Part 1b: Insight from Comparison
A major insight from comparing these programs is their usability, interactivity, and presentational nature. Throughout these postings, one can see a common call for digital humanities projects to be user friendly and interactive for a larger audience. An example of a mapping project designed for both academic and non-academic audiences is the Stanford mapping project of the ancient Roman world, link. Elijah Meeks, the project's technical lead, testified in a video created for the Roy Rosenzweig Center for History and New Media at George Mason University that he was very much surprised by all of the non-academic interaction the project was receiving. Curious members of the public and creators of history-based games are using the site for non-academic research, and I believe this call for accessibility is one of the best insights that can be gained from comparing these three digital tools.
Lastly, this comparison shows, I dare say, that any text corpus can be statistically rendered. If the metadata is gathered, and there is enough of it, then a text corpus can simply be uploaded to and rendered in these various programs. The vast majority of the work comes in compiling the metadata and creating data sets for each data category.
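As a rough illustration of how little machinery the statistical rendering itself requires, the core of a word-frequency view – the kind of count a tool like Voyant displays – can be sketched in a few lines of Python. The sample corpus here is an invented fragment, not an actual interview text.

```python
import re
from collections import Counter

# A tiny stand-in corpus; a real project would load the interview texts.
corpus = "the field was wide and the field was long"

# Tokenize and count: the basic statistical step behind a frequency view.
tokens = re.findall(r"[a-z']+", corpus.lower())
frequencies = Counter(tokens)
```

Once the texts are digitized and cleaned, the counting is the easy part; the labor lies upstream, in the archive.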
Part II: How the Tools Complement Each Other
I would argue that one would be hard pressed to classify one program as more interactive than another. While Voyant has five categories of text analysis, it does not offer locational data visualization. Kepler and Palladio, conversely, do not focus on text, context, and linguistics; they look to static, quantifiable locational data. In the end, I would conclude that each program fills the weaknesses of its sister programs. In other words, Voyant is the key program for text analysis: it offers the researcher insight into how the text and language are structured, but it does not offer the ability to map or compare interview subjects. That weakness is filled by Kepler and Palladio, which both have mapping capabilities. Moreover, where Kepler cannot create network graphs from non-geolocational data, Palladio can: it can use data like the gender of the interviewee, or the former slaves' labor location within the plantation (field or house), to create a network graph. The weaknesses of one program are filled by the strengths of another.
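As a sketch of what such a network graph rests on, the following Python builds a simple edge list connecting interviewees to non-geolocational attribute values; people who share an attribute value end up linked through the same node, which is the clustering a tool like Palladio visualizes. All names and values here are invented for illustration.

```python
from collections import defaultdict

# Hypothetical records with non-geolocational attributes of the kind
# Palladio can graph (gender, labor location on the plantation).
records = [
    {"name": "Interviewee 1", "gender": "female", "labor": "house"},
    {"name": "Interviewee 2", "gender": "male", "labor": "field"},
    {"name": "Interviewee 3", "gender": "female", "labor": "field"},
]

# Connect each person to each of their attribute values; shared
# attribute nodes are what link interviewees in the resulting graph.
edges = []
for r in records:
    edges.append((r["name"], r["gender"]))
    edges.append((r["name"], r["labor"]))

# Group people under each attribute node to see the clusters.
clusters = defaultdict(list)
for person, attribute in edges:
    clusters[attribute].append(person)
```

Here "Interviewee 2" and "Interviewee 3" cluster around the shared "field" node even though no geospatial data is involved.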
Conclusion: Last Thoughts
In the end, we still need the archives to make the argument. The programs discussed here are only as strong as the data presented to them – data compiled and curated by libraries and archives. The argument is in the documents themselves. Dr. Stephen Robertson of digitalharlem.org testifies to this very fact, saying, “The websites were designed as research tools: they make it possible to search and map our research. They do not offer any interpretation of that material other than the narratives associated with maps related to the lives of selected individuals and to particular topics in our research.” These programs are very helpful tools, yes, but they are just that – tools – not argument-making or thesis-creating machines for the digital humanities scholar. They are here to support the argument and research of the scholar, not to be the holy grail of definitive conclusions on historical theses.