A Library Intern’s Maiden Voyage through Digital Publication in the Antarctic

This post was authored by Suzanne Raybuck, Intern with the Office of Digital Research and Scholarship in the Fall of 2019. Suzanne recounts her experience working with Special Collections materials and creating a digital publication interface to display them online. The final version is not yet live, but this post contains previews of the interface.

A Hercules emergency oil lantern from Operation Deep Freeze.

When I was originally brought on as the Digital Publication Intern for the Office of Digital Research and Scholarship, I had virtually no concept of what I would be doing in my new position. But very early on I knew that I wanted to work with the Robert E. Hancock Jr. Collection at FSU Special Collections. The collection “contains materials regarding military operations in the Antarctic, primarily focusing on the Operation Deep Freeze II mission.” Based on that description, it’s safe to assume it contains lots of important and scholarly documents and artifacts. However, it also contains various memorabilia from Robert E. Hancock Jr.’s time in Antarctica (including many, many tiny penguin figurines, a drawing of Mickey Mouse shaking a penguin’s hand, model navy destroyer ships, lumps of coal, emergency lanterns, and military rations). This wonderful collection of artifacts is endlessly fascinating because it provides a series of vignettes of life at the South Pole in the form of really fun, random objects.

A model of the Wind-class icebreaker USCGC Southwind, which participated in Operation Deep Freeze.

I found this collection while searching through Special Collections for a fun series of documents to use as guinea pigs for a new publication system we were testing. Essentially, I needed a bunch of documents in similar formats that we could transform into digital objects and then use to test out different publication tools. After spending maybe an hour with a variety of fun models and pictures, I found the Operation Deep Freeze Newsletters nestled in a box of other periodicals from Antarctica. The Newsletters were published by the military and sent to the families of servicemen stationed in Antarctica to share news from the various bases. They were mostly written by incredibly bored servicemen just trying to pass the time at their freezing posts. This boredom resulted in the inaugural newsletter detailing the long and involved process of how a band of grizzled soldiers tried to hatch live chicks from commercial eggs for the upcoming Easter holiday. I had definitely found my guinea pig documents.

The original front page of Volume 1 Issue 1 of the Operation Deep Freeze Newsletter.

After finding the newsletters, I was tasked by our Digital Humanities Librarian, Sarah Stanley, with first encoding these newsletters in a data-rich .xml format called the Text Encoding Initiative, or TEI, and then figuring out how to publish them online. To accomplish this, we had to take three key factors into consideration: maintaining the format of the newsletters, providing good display functionality (e.g., tables of contents, hyperlinks, page view/scroll view), and ease of use. With these in mind, I started trying out different publication methods such as eXist-db’s TEI Publisher, which proved to be a challenging introduction to digital publishing.

eXist-db is an XML database tool that can be used to build web applications. We used the TEI Publisher package to create a digital collection that would use our TEI data format and present it in a clean and simple interface. The process of generating an application was intricate and required lots of specialized knowledge of both TEI files and their accompanying customization files. Additionally, we had no idea how the digital edition would look before we generated an application and viewed it, so if some small part of the display was off, we would have to delete the app, minimally adjust our code, and generate a new app from the very beginning. Once we did get a finalized version generated, the overall look and feel of the page was exactly what we had hoped: very clean and easy to read. However, because we were using a program to generate the app for us, we had very limited capacity to tweak the website interface and design or to add our own custom parts. Ultimately, the functionality of eXist-db did not quite meet our needs, and we tried to find a solution that would let us get a bit more hands-on with our edition.

A screencap of the eXist-db interface we created. It’s very clean and easy to navigate, but at least four iterations of apps went into getting this particular layout.

Another possible publication tool didn’t arrive until the next semester, when I was working on publishing a collection of poetry translations online. Sarah pointed me towards “ed,” a template for minimal editions built on the static site generator Jekyll. After looking at the examples, the display was again very clean and easy to interact with, so we decided to give it a shot. After deploying some quick test sites, we found that it was incredibly easy to work with and consistently generated beautifully designed websites that displayed our editions intuitively. It also had built-in search and annotation features, which we were looking for in our poetry project. The only problem was that we had to translate our TEI format into markdown, which caused us to lose huge amounts of metadata and information about textual styling that would be useful to other researchers. We made a judgment call and decided to keep looking for something that would preserve our format while giving us all the functionality and display options that we found with ed.

A screenshot of our test site for ed. Most of the sample texts we used were poems from Wilfred Owen, hence the name. Here you can see that the layout is slightly different since ed automatically creates larger title text. Unfortunately, we had to change all our TEI files to markdown, which got rid of most of our metadata.

The final option we looked at was a JavaScript library called CETEIcean, which takes TEI files and translates them directly into HTML. With a single script added to any existing HTML page, we could take our TEI files and easily publish them. Again, we started making some test pages, played with the code, and quickly ran into a problem. Because CETEIcean is just a JavaScript library, it doesn’t automatically build websites for you the way eXist-db and ed do. If we used CETEIcean, we would have to make every single page on our website from scratch, repeating tons of HTML and JavaScript along the way. Sarah was enthusiastic about using CETEIcean since it did arguably check all our boxes, but I wanted to find a more efficient way.

In the end, we settled on using a combination of CETEIcean and ed, along with chunks of original code, to create our own web application, which we named Pilot: Publishing Interface for Literary Objects in TEI¹. We essentially took the quick and intuitive page generation from ed and the JavaScript transformation of TEI from CETEIcean, and combined them, all running on a Node.js server. Because we made Pilot from scratch, we can include or add all the functionality we want, such as annotation, interactivity, and variant readings of the base newsletters.

A screenshot of what the first draft of our Pilot interface looks like. This page was automatically generated by the server file after reading a folder of TEI files, transforming them to HTML, and finally running them through three templates to get the desired display.

Though this project was long and frustrating, it ended up teaching me one of the most important points of digital publishing: digital representation of texts adds to the work, rather than merely reproducing it. Digital publishing sits at a unique intersection where we have to negotiate the appearance of the facsimile, the functionality the editors want, and the demands of a digital medium. With all of these competing agendas, it’s easy to forget that a digital edition is a creative opportunity. With the vast array of tools offered by the web, developers can take advantage of things like interactive elements, user input, and different types of media to create editions that can only exist in digital spaces. In a way, digital editions represent a new kind of edition that acts more like an archive, where researchers can explore a digital space to find artifacts that are curated through organization and interface.

We plan for our iteration of the Newsletters in Pilot to allow for full-text searching, public annotation, different readings, and interactive displays. With these new features, we hope that the Newsletters will be read and understood in entirely different ways than their paper counterparts, and that readers will be able to interact with such an engaging yet little-known collection.

Notes

¹ As an homage to CETEIcean (a pun on “cetacean,” which means “of or relating to whales”), we decided to keep with the whale theme and name our project after the pilot whale.

Using R on Early English Books Online

In order to follow along with this post you will need:

  1. Basic knowledge of the Text Encoding Initiative guidelines for marking up texts.
  2. Understanding of the structure of XML and the basics of XPath.
  3. Some experience with Regular Expressions is helpful, but not necessary.
  4. A willingness to learn R!

A few months ago, I started working through Matt Jockers’ Text Analysis with R for Students of Literature. I wanted to improve my text analysis skills, especially since I knew we would be acquiring the EEBO-TCP phase II texts, which contain text data for thousands of early modern English texts (if you are an FSU student or faculty member and you want access to these files, email me). To start, I decided to do some analysis on Holinshed’s Chronicles, which are famous for their impact on Shakespeare’s history plays. While I have been able to create a few basic analyses and visualizations with this data, I’m still learning and expanding my understanding of R. If you ever want to work through some of the ins-and-outs (or would prefer an in-person consultation on R), you should attend the Percolator from 3-5 on Wednesdays in Strozier or email me to schedule a consultation. We will also be holding a text analysis workshop from 10-11 on April 14.

I am going to be working from two of the EEBO-TCP phase I texts, since these are currently open access. You can download the entire corpus for phase one in SGML format: https://umich.app.box.com/s/nfdp6hz228qtbl2hwhhb. I’ve used a stylesheet generated by the TEI council to transform the files into TEI P5-compliant XML files. You can get the example files on my GitHub page (along with the finalized code). Alternately, you can get all of the P5-compliant TEI files directly from the Text Creation Partnership GitHub.

If you want to follow along with this blog post, do the following:

Step 1. Get your texts. Go to my GitHub page and download holinshed-v1.xml and holinshed-v2.xml. Put them in a directory that you can easily find (I have mine on my desktop in a directory called “holinshed” within another directory called “eebo_r”).

Step 2. Download R and RStudio, as outlined in our Text Analysis libguide.

Step 3. Set Working Directory. Open RStudio and type setwd(""), where the path to the folder you created goes inside the quotes. On a Mac, your path will likely look something like this:

setwd("~/Desktop/eebo_r")

And on Windows it will look something like:

setwd("C:/Users/scstanley/Desktop/eebo_r")

(Note that you shouldn’t use single “\” characters in Windows filepaths, even though that is the Windows standard. The backslash is an escape character in R, so use forward slashes instead.)
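
If you prefer to keep backslashes, doubling them also works, since "\\" escapes to a single literal backslash; for example, the Windows path above could be written as:

setwd("C:\\Users\\scstanley\\Desktop\\eebo_r")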

You can either type this into the script pane or into the console. My script pane is in the top left, but yours may be somewhere else within your RStudio environment. To run the current line from the script pane, hit “ctrl+enter.” Note: I am using the script pane to edit my code and hitting ctrl+enter to run it in the console. If you just want to run your code in the console without saving it as a script, you can type directly into the console.

Step 4. Install the XML and Text Mining packages. Go to Tools > Install Packages and type “XML” (all uppercase) into the Packages text field. Click “Install.” Do the same with “tm” (all lowercase). You could also enter install.packages("tm") and install.packages("XML") into your console with the same effect.

Step 5. Now that you have the XML and text mining packages installed, you should call them into the session:

library(XML)
library(tm)

Again, hit ctrl+enter. 

Now you’re ready to get started working with R!

Remember from the beginning of this post that I created a directory within my working directory (“~/Desktop/eebo_r”) to store the files I want to analyze. I called this directory “holinshed”. I am going to create an object called `directory` that references that filepath. To do this, I’m going to use the assignment operator (`<-`), which is used constantly in R to give a more complex or verbose object a shorter name. In this case, we will say:

directory <- "holinshed"

Now, we want to get all of the files within that directory: 

files <- dir(path=directory, pattern=".*xml")

This line of code creates another object called “files”, which looks inside the directory we stored in the “directory” object and collects the names of all of the files that end in “.xml” (all of the XML files).
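
If the two sample files are in the “holinshed” directory, printing `files` in the console is a quick sanity check; it should return something like:

> files
[1] "holinshed-v1.xml" "holinshed-v2.xml"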

This is where things can get a little confusing if you don’t understand XML and XPath. For a basic overview, you can take a detour to my presentation on TEI from the Discover DH workshop series, which contains an overview of XML.

What you will need to know for this exercise is that XML structures are perfectly nested and hierarchical, and you can navigate up and down that hierarchy using XPath. If XML is like a tree, XPath is your way of moving up and down branches to twigs, jumping to other branches, or going back to the trunk.

For the purposes of this assignment, I am interested in specific divisions within Holinshed’s Chronicles—specifically, the ones that are labelled “chapter” and “section” by the encoders of the EEBO-TCP texts. The way that I would navigate from the root of the document to these two types of divisions is with the following XPath:

/tei:TEI//tei:div[@type='chapter'] | /tei:TEI//tei:div[@type='section']

(find me all the divisions with a value of “chapter” on the type attribute AND find me all the divisions with the value of “section” on the type attribute.)

Out of the box, R cannot parse XPath, but the XML package that you installed at the beginning will allow you to select only those pieces from your documents.

Now we need to get the XML content out of the two files in our “holinshed” directory. To do this, we will need to create a for loop. To start, create an empty list.

documents.list <- list()

This gives us a place to store the results from each pass of the for loop. Without the empty list, the content would just keep overwriting itself, so at the end you would only have the last object. For example, I made the mistake of not creating an empty list when I first wrote my for loop, and I kept getting only the divisions from the second volume of Holinshed’s Chronicles, since the second volume was overwriting the first.

Our for loop is now going to take every file in the “holinshed” directory and do the same thing to it. We begin a for loop like this:

for(i in 1:length(files)){
   # the rest of the code goes here
}

This basically says: for every number from 1 to the length of the “files” object (in this case, 2), do the following. Also, note that the pound sign marks that line as a comment, so it won’t be processed as R code.

Now, within this for loop, we are going to specify what should be done to each file. We are going to create a document object using `xmlTreeParse` for each object within the “holinshed” directory.

document <- xmlTreeParse(file.path(directory, files[i]), useInternalNodes = TRUE) 

(If you find it hard to read long code on one line, you can add line breaks. Just make sure that each break comes at a logical place where the expression is still incomplete (such as after a comma), so that R keeps reading, and indent the continuation line for readability. Unfortunately, WordPress isn’t allowing me to provide an example, but you can see how that would look in practice in the example R file provided in my eebo_r GitHub repository.)

The [i] in “files[i]” is where the loop counter is stored on each pass. So the first loop will use files[1] and the second will use files[2] (which correspond to “holinshed-v1.xml” and “holinshed-v2.xml”). If we had more than two XML files in this directory, the for loop would apply to all of those as well.

Next, you will use the empty list that you created. Define the entry of documents.list that corresponds to files[1] or files[2] (holinshed-v1.xml and holinshed-v2.xml, respectively) as the node set matching the XPath we created above. In other words, create a list of all of the divisions with a value on @type of “chapter” or “section” within each document.

documents.list[[files[i]]] <- getNodeSet(document, "/tei:TEI//tei:div[@type='chapter'] | /tei:TEI//tei:div[@type='section']", namespaces = c(tei="http://www.tei-c.org/ns/1.0"))

Ignore namespaces for now. They are important to understanding XML, but as long as you don’t have documents that contain multiple XML languages, you won’t need to worry as much about it. I can discuss the function and importance of namespaces in another post.

So, in the end, your full for loop will look like this:

for(i in 1:length(files)){
   document <- xmlTreeParse(file.path(directory, files[i]), useInternalNodes = TRUE)
   documents.list[[files[i]]] <- getNodeSet(document, "/tei:TEI//tei:div[@type='chapter'] | /tei:TEI//tei:div[@type='section']", 
        namespaces = c(tei="http://www.tei-c.org/ns/1.0"))
}

If you want to run multiple lines of code, you can highlight the entire for loop and hit “ctrl+enter.” Alternately, you can put your cursor at the beginning of the for loop in the script pane and press “option+command+E” on a Mac, or go to the menu and click Code > Run Region > Run From Line to End, to run from that line to the end of the script. This is also useful if you ever save an R script and want to come back to it later and start from where you left off. This way you don’t need to go back and run each line individually.

Now you should have a list with two items. Each item on this list is a node set (which is a specialized type of list). Rather than keeping documents.list as two nested lists, I want to convert each document into its own list. I did it with the following code. See if you can figure out what exactly is happening here:

holinshed1.l <- documents.list[[1]] 
holinshed2.l <- documents.list[[2]]

Now that I have two separate lists for each document, I want to concatenate them into a single list of divisions. In R, you use `c` to concatenate objects:

both.documents <- c(holinshed1.l, holinshed2.l)

Now, if you check `length(both.documents)`, you should get 359. Your console will look like this:

> length(both.documents)
[1] 359

Basically, what this means is that there are a total of 359 divisions in both documents that have a value on type of either “chapter” or “section.”

Now, you are going to want to return all of the paragraphs that are children of these divisions.* To do this, we are going to need to create another for loop. This time, instead of creating an empty list, we will create an empty vector. I’m going to call this vector paras.lower.

paras.lower <- vector()

I’m going to give you the full code for selecting the contents (text, basically) of all of the paragraphs, and then explain it point-by-point after.

for(i in 1:length(both.documents)){
   paras <- xmlElementsByTagName(both.documents[[i]], "p")
   paras.words.v <- paste(sapply(paras, xmlValue), collapse = " ")
   paras.lower[[i]] <- tolower(paras.words.v)
}

This says for every object in 1 to the length of “both.documents” (which we determined was equivalent to 359 divisions), do the following:

Create an object called “paras” which will select all of the children of the node set “both.documents” with the tag name of “p.” On each loop, do this for one division within both.documents.

Now create another object (this time a character vector) that essentially takes the content of paras (the text within all the <p> elements, with the nested tags stripped) and collapses it into a single string.

Now take the vector you’ve created (all of the words from each paragraph within each division) and make the characters all lowercase.

This process may seem slightly confusing at first, especially if you are unfamiliar with what each piece is doing. If you are ever confused, you can type ?term into the console, and you will find the documentation for that specific aspect of R. So, for example, if you typed ?sapply, you’d see that sapply applies a given function over a list or vector (so essentially the same thing happens to multiple objects within a vector or list, without you needing to explicitly state what happens to each item).
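
As a tiny illustration of sapply (not part of the Holinshed code), the following applies nchar() to every element of a short character vector and returns the length of each word:

sapply(c("read", "earth", "hath"), nchar)
#  read earth  hath
#     4     5     4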

Now that you have your character vector with the content of all of the paragraphs, you can start cleaning the text. The one problem is that paras.lower contains 359 separate strings (one per division) that need to be combined into one. You can do this by using the paste() function we used a few lines ago.

holinshed.all <- paste(paras.lower, collapse=" ", sep="\n") 

Now, if we ask for the length of holinshed.all, we see that it returns 1, instead of 359.
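
You can confirm this in the console:

> length(holinshed.all)
[1] 1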

Now, we are going to use the tm package that we installed at the beginning. This package can facilitate a lot of types of analysis that we won’t cover in this post. We are going to simply use it to easily remove stopwords from our texts. Stopwords are commonly-occurring words that we may not want to include in our analysis, such as “the”, “a”, “when”, etc.
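
If you are curious about what counts as a stopword, you can peek at the tm package’s built-in English list (a quick check; the exact contents depend on your version of tm):

length(stopwords("english"))   # roughly 170 short function words
stopwords("english")[1:5]      # e.g. "i", "me", "my", "myself", "we"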

To do this, you are first going to create a corpus from your holinshed.all vector:

holinshed.corpus <- Corpus(VectorSource(holinshed.all))

Now you will remove stopwords from this corpus. You can use the following code to remove all English stopwords:

holinshed.corpus = tm_map(holinshed.corpus, removeWords, stopwords("english"))

However, with a corpus this big, R will run very slowly (it will likely take upwards of 10 minutes to remove all the stopwords from your corpus). If you want to let it run and take a break here, feel free to do so. However, if you are impatient and would prefer to continue right now, I have a premade text corpus in my R GitHub repository, which you can use instead of following the next step.

If you do want to remove the stopwords by yourself, run the above code, grab yourself a cup of coffee, work on some other writing projects for a bit, take a nap—whatever suits you best. Once the stopwords are removed, you will see a “>” once again in your console, and you can then type in

writeCorpus(holinshed.corpus, filenames ="holinshed.txt")

This will create a file that has all of the content of the paragraphs within the <div>s with the type value of “chapter” or “section” minus the stopwords.

**Impatient people who didn’t want to wait for the stopwords to get removed can start up again here**

Now that you have a text file with all of the relevant words from Holinshed’s Chronicles (holinshed.txt), we are going to analyze the frequencies of words within the corpus.

We are going to use the scan() function to get all of the characters in the Holinshed corpus.

holinshed <- scan("holinshed.txt", what="character", sep="\n")

This line of R will create an object called “holinshed” which contains all of the character data within holinshed.txt (the corpus you just created).

You will once again need to use the “paste” function to collapse all of the lines into one (since the scan() call above split the document at each new line).

holinshed <- paste(holinshed, collapse=" ")

Now you will split this very long line of characters at the word level:

holinshed.words <- strsplit(holinshed, "\\W") 

This splits the holinshed string at every non-word character (“\\W”), breaking it into individual words. If you attempt to show the first 10 items within holinshed.words (`holinshed.words[1:10]`), you will notice that it gives you a truncated version of the whole document, and then 9 NULLs. This is because strsplit converts your vector into a list, and then treats the whole document like the first item on that list. Using unlist(), we can create another character vector:

holinshed.words <- unlist(holinshed.words)

Now, if you enter `holinshed.words[1:10]`, you will see that it returns the first 10 words… but not quite. You will notice that there are a number of blank entries, which are represented by quote marks with no content. In order to remove these, we can say:

holinshed.words <- holinshed.words[which(holinshed.words!="")]

Now, if you enter holinshed.words[1:10], it will display the first 10 words:

[1] "read"     "earth"    "hath"     "beene"    "diuided"  "thrée"  
[7] "parts"    "euen"     "sithens"  "generall" 

In order to get the frequencies of the words within our corpus, we will need to create a table of holinshed.words. In R, this is incredibly simple:

holinshed.frequencies <- table(holinshed.words) 

Now, if you enter length(holinshed.frequencies), R will return 37086. This means that there are 37,086 unique strings (words) within Holinshed’s Chronicles. However, if you look at the first ten words in this table (`holinshed.frequencies[1:10]`), you will see that they are not words at all! Instead, the table has also returned numbers. Since I don’t care about numbers (you might, but you aren’t writing this exercise, are you?), I’m going to remove all of the numbers from my table. I determined that we start getting actual alphabetic words at position 895. So all you need to do is redefine holinshed.frequencies as being from position 895 to the end of the document.

holinshed.frequencies <- holinshed.frequencies[895:37086]
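
If you would rather not hard-code the cutoff position (895 is specific to this corpus), a hedged alternative is to drop every entry whose “word” contains a digit:

# Alternative sketch: keep only table entries whose names contain no digits
holinshed.frequencies <- holinshed.frequencies[!grepl("[0-9]", names(holinshed.frequencies))]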

Now you can sort this frequency table so that the first values of the table are the most frequent words in the corpus:

holinshed.frequencies.sort <- sort(holinshed.frequencies, decreasing = TRUE)

Now, enter `holinshed.frequencies.sort[1:10]` to return a list of the ten most frequently used words in our Holinshed corpus.

If you want a graphic representation of this list, you can plot the top twenty words (or 15 or 10):

plot(holinshed.frequencies.sort[1:20])

This graph should show up in the right pane of your RStudio environment (unless you have it configured in a different way), and will show you a visual representation of the raw frequencies of words within our corpus.
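
If you would like the words themselves along the axis, a barplot of the same table is another option (a small sketch; las = 2 rotates the labels so they are readable):

barplot(holinshed.frequencies.sort[1:20], las = 2)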

Try it on your own!

  1. We analyzed the top 20 words for the two combined volumes of Holinshed’s Chronicles, but what would our top 20 words look like if we analyzed each text individually?
  2. If you look closely at the XML, you will notice that our original XPath (/tei:TEI//tei:div[@type='chapter'] | /tei:TEI//tei:div[@type='section']) excludes a lot of content from the Chronicles. Specifically, it ignores any division without those type attributes. Further, using `xmlElementsByTagName` only selects the direct children of the node set, which excludes paragraphs that occur within divisions nested within chapters or sections (see, for example, `<div type="part">`, which occurs occasionally within `<div type="chapter">` in volume I). Write code that selects the contents of all paragraphs.
  3. Words in the top 20 list like “doo,” “haue,” and “hir” would presumably be picked up by a stopwords list, if they had been spelled like their modern English equivalents. How could you get rid of a few of these nonstandard stopwords?

Check back to my eebo_r GitHub page for additional R exercises and tutorials using the EEBO-TCP corpus! And if you have any questions about this post or want to learn more about R, schedule a consultation with me.

Notes

* I specifically don’t say that you are looking for all the paragraphs within these divisions, because the code we are about to use only selects children, not descendants. Understanding the difference between these requires some knowledge of XPath and the structure of XML documents.

 

Invisible Work, Fungible Labor

With the approaching Symposium on Invisible Work in the Digital Humanities, I’ve been thinking increasingly about my transition from graduate work in a “traditional academic department” to working in a library. As a graduate student, I was aware that my work was rendered invisible by the fact that it was often not treated as work. Indeed, until very recently, graduate assistants at private universities were not treated as real employees. And often graduate students are ineligible to become PIs on grants, or to receive other opportunities that would allow them to advance in the field. Central to the idea that graduate students don’t “do real work” is the idea that their labor and research is somehow secondary or derivative of “real work” done by faculty. Even in the digital humanities, graduate labor is figured as research assistantships, project management positions, and coordination.

The issue of “centrality” in a research project (especially a funded research project in which there are “principal investigators”) is a problem for DH researchers in libraries as well as for graduate students. As a recent article in Digital Humanities Quarterly entitled “Student Labour and Training” points out, graduate student research outputs often take less academically viable formats (like blog posts and social media). The authors note that students’ “lack of involvement in the dissemination of project outcomes […] prevents both students and the academic field as a whole from seeing student research as tantamount to faculty research.” Arguably, the traditional outputs of conference papers and single- or co-authored publications allow students more room to diverge from the PI’s stated goals for the project. The idea that students could be writing and generating scholarly products that expand upon, rather than simply feed into, a faculty member’s stated goals is somewhat jarring in an academic landscape. To many, graduate students are apprentices rather than budding practitioners in their own right.

As I moved into the realm of practitioner (in the sense that I was considered a valid employee by the FLSA and NLRA), I began to realize that, while some issues of labor disappeared, the issue of centrality to research remained. I have had the good fortune to work in a library that is open to exploring digital scholarship, and has indeed encouraged my efforts in the digital humanities. Yet there is a still-persistent underlying question about the utility of some of the work I have done: “How are you serving the existing needs of the scholarly community?” Often, especially when new initiatives have been proposed, the immediate question has been “Have you done a climate survey?” or “What are the preexisting needs of the campus community?” My reaction to this sentiment has been similar to Dot Porter’s response to the OCLC report “Does Every Research Library Need a Digital Humanities Center?”:

It is galling for these professionals to be told, as they are in the OCLC report, that “the best decision is to observe what the DH academics are already doing and then set out to address gaps,” and “What are the DH research practices at your institution, and what is an appropriate role for the library? What are the needs and desires of scholars, and which might your library address?” and especially “DH researchers don’t expect librarians to know everything about DH, and librarians should not presume to know best [my italics].” What if the librarians are the DH researchers? What if we do, in fact, know best? Not because we are brilliant, and not because we are presumptuous, but because we have been digital humanists for a while ourselves so we know what it entails?

I understand the impulse among librarians to take their cues from researchers in more “traditional” academic departments, especially considering that library and information science is considered a social science, where climate surveys, environmental scans, and other such methodologies are common. However, the fact is that in the context of the digital humanities, librarianship and information science as disciplines have greatly influenced the types of intellectual work being done in the field. To artificially remove this influence from the equation is a disservice both to librarians and to potential collaborators.

Part of this problem comes back to the issue of “centrality” I mentioned with graduate work. Acting as if the library’s (or a librarian’s) goals should be derived from the goals of faculty limits the potential impact of scholarship from librarians, either through limiting the media or venue through which it can be disseminated or limiting the findings it is allowed to make. And it’s not just the idea that librarians should be in service to faculty; it’s the idea that libraries (as organizations) generate priorities based on faculty priorities, which then filter seamlessly down to the librarians doing on-the-ground work. When talking about the complexities of librarians’ work (or service), Trevor Muñoz points out the significance of the venue of publication for the first major special issue on digital humanities librarianship: “Attending critically to this context means noting that this very welcome special issue on digital humanities and libraries was published in journal devoted to library administration” (emphasis in original). However, I would like to point out the significance of framing digital humanities as, primarily, a discussion for library administrators. It is, of course. However, it also contributes to the idea of DH in libraries as being a top-down issue, rather than one that is done in exploratory ways by librarians that feeds up into wider library (and, yes, university) goals.  

Even the promotional materials for the Invisible Work Symposium betray some of the underlying sentiment about the role that libraries play in the wider university community. From the announcement:

Imagine, for example, a typical project between a professor of history and a university digital scholarship center. Is the digital scholarship center simply providing a service, or are they considered an equal partner in the work? […] Similarly, the digital scholarship center might be thinking about recycling the resulting code for use in other projects, contributing to broader digital scholarly efforts, and so on.

In this scenario, the labor of the “digital scholarship center” is always collectivized and always working with the intention of feeding into broader efforts. The assumption is that there is always one mission for a group of library staff, and that this mission is univalent and universally agreed upon. I think that this view reduces the role that individual librarians actually play in research projects. Which is not to say that libraries don’t have unified (and often stated) goals. Libraries frequently use strategic initiatives to promote specific areas, focus collection development and digitization around specific subjects, and play to the strengths of their employees and the wider university community. However, I’d like to posit that this is no different than how departments look for candidates in key areas or conduct cluster hires for faculty positions.

I think the main problem is that flattening the various perspectives and individual research interests of librarians exacerbates perceptions of library staff as “in service.” By acting as if librarians prioritize research solely on the basis of administrative-level or department-wide mandates, we are basically saying that the work of librarians is fungible: “Anyone who can do this prescribed work in a procedural manner is qualified to do this job.” In treating the laborers who build and sustain infrastructure, design metadata schemas, and preserve and provide access to research as essentially fungible, we are treating library spaces as neutral and failing to acknowledge the rhetorical and political impact of universities as sites of knowledge production. Pushing back against this notion is especially critical at a time when administrators see libraries as primarily empty student space, and when outsiders ask “Why do you need libraries/librarians when you have Google?”

Since so many of the methods from the digital humanities are the intellectual descendants of research done in library and information science, it makes sense that librarians would own their intellectual contributions to DH work. In order to give librarians the institutional power to assert their ownership of their research, it is essential for us to acknowledge that library employees’ research agendas are not simply derivative of wider library goals (generated in some sort of nondescript aether of environmental scans). Rather, the opposite is the case: the research interests of individual employees are essential to shaping the type of work that is done at an institutional level.

Discover DH: An Introduction to Digital Humanities Theories and Methods

For budding digital humanists, it can often be difficult to know what you need to learn. On top of writing for courses, exams, presentations, and learning the traditional work of your field, you now need to learn a series of unfamiliar methods and terms (many of them opaque acronyms: RDF, TEI, JSON). Even knowing where to ask for help is a challenge, since DH resources are frequently scattered across campus.

A sample of the FSU DH network.

If you’re attuned to channels of communication in the digital humanities, you’ve probably seen a lot of learning opportunities this summer: DHSI in Victoria, HILT in Indiana, the DH conference (in Kraków this year). All of these are excellent places to immerse yourself in the field of digital humanities and to learn about the great work current scholars in the field are doing. There’s only one problem: these conferences and training events are prohibitively expensive. Even with scholarships and waived tuition, it can be very difficult to get yourself across the country (or the globe!) to learn about DH, especially if you’re in school.

This is why the Office of Digital Research and Scholarship is offering a 10-week workshop series on topics in the digital humanities. These classes are designed with busy students and scholars in mind. We will be offering two sessions of each weekly course, with one session in Strozier Library and another in a different building on campus. The workshops are divided into “hack” and “yack”: hands-on sessions focused on learning a new tool or DH skill, and discussion-based sessions, respectively.

We’ll be offering sessions on the following topics:

  • Getting Started in the Digital Humanities
  • Markdown and GitHub
  • Managing Digital Projects
  • Text Analysis and Visualization
  • Copyright and Digital Projects
  • Introduction to Text Encoding
  • Digital Tools in the Classroom
  • Network Visualization
  • Mapping
  • Publishing in the Digital Humanities

More details about the individual sessions and scheduling are at the Digital Research and Scholarship website. You can also register for individual workshops on our calendar.

Come join us in exploring this exciting new area!

Wikipedia Edit-a-thon: Using maps to fill in gaps

Maps have long served as a tool of colonialism, promoting conquest, dividing up land, and asserting ownership. The 16th-century Europa Regina map exemplifies this tendency, positioning Europe as a world ruler and sequestering other continents off to the sidelines. Indeed, maps often distort the size and shape of the world so that European powers seem the most prominent and powerful within the image.

Map of Europe as a queen, printed by Sebastian Munster in Basel in 1570.

The advent of many new digital tools has given us means to push back against the dominant narratives that maps tell us about our world. Tools like this map puzzle allow us to see how projections distort the sizes of certain countries. Mapping projects like those at Radical Cartography give us a window into how we can represent geospatial information differently and critically.

Digital platforms like Wikipedia also give us the opportunity to present new and different information about the world that could not necessarily be contained in paper resources. In that spirit, the Office of Digital Research and Scholarship is hosting a Wikipedia Edit-a-thon, in collaboration with the Department of Art History and the Caribes project. The focus of the edit-a-thon will be topics related to the Caribbean. We will edit and correct existing pages to add more resources, citations, and information. We will also create new pages about important topics that have not been entered into Wikipedia yet.

A map of all the Caribbean buildings that have stub articles in Wikipedia. Generated for FSU Libraries’ Caribbean Wikipedia Edit-a-thon, 4/16/2016.

For the occasion, I decided to create a map of the Wikipedia stubs related to the Caribbean. I focused on the Caribbean buildings and structures stub list, since this project is a collaboration with a Caribbean architecture project (Caribes). I found the geographical coordinates for all of the entries that were categorized as stubs. This allows us to visualize the areas of the Caribbean that have less complete information in Wikipedia.

Mapping could be used to visualize many other gaps in Wikipedia’s information base, and this is certainly not the only area in which Wikipedia is lacking. Digital tools and resources like digital maps and Wikipedia could allow us to shift the focus toward important but underrepresented figures, events, and movements in the world’s history, but we can only accomplish this if we put time into building out those information sources.

Join us this Thursday April 14th, from 10a-2p in the Art and Design Library (2020 WJB) for the Caribbean Wikipedia edit-a-thon. Bring a laptop if you can, and we’ll provide the training, treats, and text resources.

So FSU passed an open access policy… what does that mean for me?

Perhaps you are a new professor at Florida State University. And perhaps you have some articles you would like to publish. However, there are a few things getting in your way:

  1. Publishing contracts are often confusing and restrictive, leaving faculty with little control over their work once it has been published
  2. The journals you would like to publish in often keep your work behind a paywall so that only a fraction of the world’s population can access it (which decreases the impact of your research)
  3. Journals that do allow you to make your work openly available often have high article processing charges (APCs) which you can’t necessarily afford

Two recent developments may help you with these conundrums. The first is the Faculty Senate Open Access Policy. This policy was passed by unanimous vote on February 17th of this year. It creates a safe harbor for faculty intellectual property rights by granting FSU permission to share scholarly journal articles for non-commercial purposes. Basically, this gives faculty the language to avoid overly-restrictive publication contracts, and allows them to more easily share their work, despite publishers’ efforts to put scholarship behind a paywall.

The launch of DigiNole: FSU’s Research Repository comes on the heels of the OA Policy, and provides faculty with a platform for making their research publicly available online. DigiNole is an open access repository, which allows anyone to view the scholarship contained within it. By making all of FSU’s articles available in repositories like DigiNole, scholars and researchers can increase the visibility and impact of their research by 50-500%, according to several studies. You can even track your impact more easily with DigiNole, since faculty who deposit their scholarship get monthly readership reports with analytics on the use of their scholarship. Having easy access to these numbers can help with hiring and promotion, as it gives you concrete and tangible evidence of your impact.

The Office of Digital Research and Scholarship at the University Libraries specializes in academic publishing and open access. If you have any questions about DigiNole or the OA policy, contact Devin Soper (850.645.2600), Scholarly Communications Librarian at Strozier Library.

Visualizing FSU’s Digital Scholarship Network

It can be difficult to get started in interdisciplinary fields like the digital humanities, since people and resources are sometimes fractured and spread across different departments, schools, and even institutions. As a new staff member, I encountered this problem first hand. I often needed to know about the happenings in digital humanities around campus, but struggled to find out what goes on outside of my own department in the library. Since I am a member of the Office of Digital Research and Scholarship (and since we are always trying out and teaching digital research tools), I decided to use Gephi to solve this problem.

Gephi is used to create network graphs—visualizations that show connections between different things. The “things” that we are trying to connect are called “nodes” and the connections themselves are called “edges.” Scott Weingart’s excellent “Demystifying Networks, Parts I & II” provides a detailed overview of the terminology and logistics of networks.

In the context of my project, the “nodes” are people, projects, and places on FSU’s campus. The “edges,” or lines, connect people to places and projects, and projects to places. All this data was compiled into two separate comma-separated value (.csv) files: one that described the different nodes, and another that showed which nodes were connected to each other.¹ I then uploaded these files into Gephi’s data laboratory.
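
As a rough sketch of what those two files might contain (the rows here are hypothetical, and the column names follow Gephi’s usual Id/Label and Source/Target conventions), here is how you could build them in R:

# Hypothetical example: one table describing nodes, one describing edges
nodes <- data.frame(Id    = c(1, 2, 3),
                    Label = c("A Librarian", "Office of Digital Research and Scholarship", "A Mapping Project"),
                    Type  = c("person", "place", "project"))
edges <- data.frame(Source = c(1, 1, 3),
                    Target = c(2, 3, 2))
write.csv(nodes, "nodes.csv", row.names = FALSE)
write.csv(edges, "edges.csv", row.names = FALSE)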

Gephi automatically generated a very simple, grey, and bland network graph. I then edited the view so that nodes displayed different colors depending on what type of node they were. People are purple, projects are green, and “places” (departments/discussion groups) are red. I then changed the display so that the nodes were generally evenly spaced, which allowed for better visibility.² I also made the node labels visible, which allows you to see the names of the different entities in the digital scholarship environment at FSU. And here’s what the graph looks like!

The FSU Digital Scholarship Network. For a larger, better-quality version of this image, click here

Now, this visualization is nowhere near the complete network of people doing digital work at FSU. It was really only generated from the people and projects that the Office of Digital Research and Scholarship has encountered in our short existence, but we will continue to expand this list as more people engage within and across this network at FSU. Our hope is that visualizing the interconnectedness of different scholarly activities at FSU will facilitate the creation of new and better knowledge.³

If you are interested in starting a digital research project, but find this visualization overwhelming, please stop by the Percolator: our “Digital Scholarship Support Group”, every Wednesday from 3-5 on the lower level of Strozier in the Technology and Digital Scholarship suite. The Percolator is an informal space to workshop project proposals, explore new tools, and discuss issues in the field of digital scholarship.

FSU Libraries’ newly formed Office of Digital Research and Scholarship (DRS) provides support, infrastructure and consulting for technology-focused research projects in the areas of digital humanities, academic/digital publishing, data management, and more. We are focused on connecting people to people, building collaborative research partnerships across campus, and providing platforms for new forms of scholarship. Visit lib.fsu.edu/drs for more information.

Notes

¹ You can find the .csv files for this document here.

² I used a layout based on the “Fruchterman Reingold” algorithm, if you are looking to generate a graph like this one.

³ Are you doing digital work at FSU and not yet on our list? Add yourself here!

Copyright in the Age of Digital Reproduction

* Editorial note about the monkey selfie (pictured: one of the photos the monkey took with David Slater’s camera): The photographer behind the famous monkey selfie picture is threatening to take legal action against Wikimedia after they refused to remove his picture because ‘the monkey took it’. David Slater, from Coleford, Gloucestershire, was taking photos of macaques on the Indonesian island of Sulawesi in 2011 when the animals began to investigate his equipment. A black crested macaque appeared to be checking out its appearance in the lens, and it wasn’t long before it hijacked the camera and began snapping away. Learn more at https://en.wikipedia.org/wiki/Monkey_selfie.

As a recent addition to the Office of Digital Research and Scholarship, I am relatively new to the discussions surrounding copyright that occur in libraries. My academic background is in medieval and early modern literature, so I have not had to think terribly hard about fair use; all of the works I write about are in the public domain. I once contemplated using an image from Early English Books Online in my thesis, but I took one look at the requirements for obtaining permission, cried, and continued writing my thesis without images.

Continue reading Copyright in the Age of Digital Reproduction

Academia dot ed[you]?

A piece published in the Chronicle of Higher Education last week criticized Academia.edu and sparked discussion surrounding the role of for-profit companies in facilitating sharing and allowing access to scholarship. Or perhaps it is better to say “reignited,” as many of the issues brought up in the piece have been topics of discussion in scholarly communications and other areas for some time now.¹ The main concern of academics is that sites like Academia.edu and ResearchGate are profiting off the work of academics and universities, and adding little to support and provide access to research.

This and other concerns were published in a post written by the University of California’s Office of Scholarly Communications just a day before the Chronicle piece. UC’s post outlines the differences between scholarly social networking platforms (like Academia.edu) and institutional repositories—university-specific platforms that make scholarship openly available. They stress that academic social networking sites do not actually fulfill open access requirements that more research funding agencies are imposing on authors. At Florida State, University Libraries hosts and manages a Research Repository, engaging the campus community in questions of access, impact, and shared scholarly goals. Repositories like DigiNole create the opportunity for easier access to scholarly work, meaning that more people can download and cite it, unlike Academia.edu and ResearchGate, which both require log-ins to view material in full.

Continue reading Academia dot ed[you]?