It’s rare that you even want to look back at something you’ve written before, especially when that thing is a dusty dissertation over thirty years old. Yet, I recall a section that could bear new fruit, maybe, if newer technologies could be applied. And so this project returns to a 1980s example of humanities computing to rerun it — no, deepen it — using 2010s technology. What newness might arise?

Besides, in my new job, I need to learn how to use the Duke Condor Grid, CI-Connect, and the Open Science Grid.

What I did in the Dinosaur Era. In 1986 or so, I used Pascal to poke at a sixteenth-century literary work by Thomas More called A Dialogue of Comfort Against Tribulation (1553), quantifying the structure and progression of the dialogue. I had to build data sets from a handy bound “Everyman’s Library” edition, breaking the text into encoded segments roughly a third of a page long, since I wasn’t about to type in the whole work. In fact, I don’t think I even had the computer resources at hand for a full transcription, since I was working with rather constrained storage on 800K floppy disks (at best). My tools were Borland’s “Turbo Pascal” and “PC-Write,” both running on MS-DOS — not even on pretty Windows 3.1! The Pascal programs I wrote parsed and counted, grouping the encoded data to help show the structure of the work. I was actually quite the Pascal programmer.


I can’t say that my dissertation (Res et Verbum: Rhetorical Unities of Act and Word, 1987) was influential, but it did follow some sparsely trod humanities computing paths in its time. The residues of this computation (and that’s probably all one can call them) are on pages 120-121 in a long footnote describing the method, a table, and a paragraph. This was the mid-1980s, after all: the Dinosaur Era in computing and a time of relatively few scholarly computing resources. There was, at the time, no Project Gutenberg edition of More’s Dialogue of Comfort, though by then the project had been occupying typists for about a decade and a half. Linguists and grammarians had just begun thinking about how to harness computational tools. Literary critics and scholars saw computers as extensions of the typewriter, at best. (I note now that I was skeptical about using a machine and reducing More’s text to mere data: “Obviously, the data was entered with judgments that could not be labelled scientific or objective,” I footnoted in 1987, “but then such labels would be inappropriate in any case — and undesirable in the context of humane studies.” Hm. I think I would qualify this differently today.)

Today, the scholarly landscape is different. The Dialogue of Comfort is downloadable — “EBook number 17075,” released on November 16, 2005. Computational methods in language study have been transformed in academe, and industry has taken an interest in language analysis. “Search,” of course, is a phenomenon rooted in language, and so considerable computer science and programming talent has been applied to problems of language. (Often this work has been wonderfully creative. See Halevy, Norvig, and Pereira, “The Unreasonable Effectiveness of Data,” DOI: http://doi.ieeecomputersociety.org/10.1109/MIS.2009.36. If you’re not a member, IEEE will ding you $19, but search the Google.) Computational manipulation and analysis of language have made huge strides, and many of the products — such as parsers, tagging tools, and databases — are freely available on the Internet.

The point is that the meager effort I made in the mid-1980s could now be redone with more robust tools and resources, while teaching me the basics (and joys) of using the Open Science Grid. And this humanistic pursuit has a place in today’s research computing, too.

The idea. My original stab at computing-in-lit in 1986 sought to show how the structure of the dialogue traced a progression and development of the characters in the work. That progression moved from “contention” between the parties — an old, sickly man and a young man entering his prime — toward “conversation” and healing interaction. Now I can use a complete text, and I can choose among many parsers and taggers that characterize the parts of speech or grammar of the piece. I can use databases of vocabulary linked with “affect” — how individual words call up feelings. And there are some interesting web-based tools that help to analyze and visualize texts.
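To give a flavor of the kind of counting that once required bespoke Pascal: the sketch below is a minimal Python illustration, not my actual pipeline. It splits a stretch of dialogue into speaker turns for the two interlocutors, Antony and Vincent, and tallies question-turns per speaker — one crude proxy for “contention” giving way to “conversation.” The speaker-tag encoding and the sample lines are assumptions invented for the example; the Project Gutenberg text is not marked up this way.

```python
import re
from collections import Counter

# Placeholder dialogue in an assumed "SPEAKER: utterance" encoding.
# These lines are illustrative, not quotations from More's text.
sample = """\
VINCENT: Good uncle, how shall we find comfort in such fear?
ANTONY: Comfort, cousin, standeth not in the absence of tribulation.
VINCENT: But is not the fear itself a tribulation?
ANTONY: It is, and yet a medicine too, if it drive us to God.
"""

def count_questions(text):
    """Tally total turns and question-turns per speaker."""
    turns = Counter()
    questions = Counter()
    for line in text.splitlines():
        m = re.match(r"([A-Z]+):\s*(.*)", line)
        if not m:
            continue
        speaker, utterance = m.groups()
        turns[speaker] += 1
        if utterance.rstrip().endswith("?"):
            questions[speaker] += 1
    return turns, questions

turns, questions = count_questions(sample)
for speaker in turns:
    print(speaker, turns[speaker], questions[speaker])
```

Tracking the ratio of questions (or interruptions, or contradictions) per speaker across the work’s segments is the sort of simple, rigidly applied count that can be run at scale on a grid.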

Over the course of the semester, I’ll chronicle this project. I’m fitting it into the flow of usual research computing work, in part for “professional development” and learning The Grid and in part to “go back to the well” of sixteenth-century literature and history.

Francis Bacon allowed that “some Bookes also may be read by Deputy, and Extracts made of them by Others” (“Of Studies”), and in some sense that’s what this little project does — except that the “Deputy” is a machine governed by an algorithm. The utility of the thing comes from the application of an algorithm — in this case, a series of set practices rigidly applied. Such practices can surface patterns in other sorts of data, and the task of this analysis is simply to see whether a couple of patterns are reflected in a sixteenth-century literary work. Does the nature of the dialogue change over the course of the work, perhaps reflecting the “comfort” shared as the conversation develops? Does the “affect” of the words change as well?
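The second question can be posed computationally by scoring each segment’s words against an affect lexicon. The sketch below uses a tiny, invented valence table purely for illustration; a real analysis would load a published resource such as ANEW or the NRC Emotion Lexicon, and the sample sentences here are made up, not drawn from More.

```python
# Toy affect scoring: mean valence of recognized words per segment.
# The lexicon values below are invented for illustration only.
TOY_VALENCE = {
    "comfort": 0.8, "hope": 0.7, "joy": 0.9,
    "fear": -0.7, "tribulation": -0.6, "death": -0.8,
}

def segment_valence(segment):
    """Average valence of words found in the toy lexicon (None if no hits)."""
    words = [w.strip(".,;?!").lower() for w in segment.split()]
    hits = [TOY_VALENCE[w] for w in words if w in TOY_VALENCE]
    return sum(hits) / len(hits) if hits else None

early = "What fear and tribulation await us, and what death?"
late = "Comfort and hope, cousin, and in the end joy."
print(segment_valence(early), segment_valence(late))
```

Run segment by segment over the whole Dialogue, a score like this would let one plot whether affect drifts from tribulation toward comfort as the conversation unfolds.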

— Mark R. DeLong, PhD (mark.delong@duke.edu)