There are several recurring debates in the digital humanities, or rather, perhaps, between the digital humanities and the humanities proper. One that is particularly thorny is the “Do you need to know how to code?” debate, which in my experience is frequently aliased as the “Should all humanists become programmers?” debate. One memorable event in the debate was Stephen Ramsay’s (2011a) remark: “Do you have to know how to code? I’m a tenured professor of Digital Humanities and I say ‘yes.’” A sure-fire starter. Ramsay used the metaphor of building to describe coding work done in DH. Taking up this point, Andrew Prescott (2012) argued that in most humanities software building, DH researchers seemed to sit uncomfortably in the back seat. Most non-DH principal investigators seem to regard developing software as a support act without intrinsic scientific merit; Prescott used the word ‘donkeywork’ to capture what, in his experience, humanities researchers generally thought of software development. Prescott reasoned that as long as digital humanities researchers were not in the driver’s seat, DH would remain a field lacking an intellectual agenda of its own.
I agree: in a service or support role neither DH nor coding will ever develop their full intellectual potential for the humanities. As long as it is donkeywork it will be a mere re-expression and remediation of what went before. The problem is that the donkey has to cast his or her epistemic phenomenology onto the concepts and relations of the phenomenology of the humanities PI. In such casting there will be mismatches and unrealized possibilities for modeling the domain, the problem, the data, and the relations between them. It is quite literally like a translation, but a warped and skewed one. It is as if the PI were to request a German translation of his English text but required it to be written according to English syntax, ignoring the partial incommensurability of semantic items like ‘Dasein’ and ‘being’. Or compare it to commissioning a painting from Van Gogh but requiring it be done in the style of Rembrandt. The result would no doubt be interesting, but it would satisfy neither the patron nor the artist. The benefactor would get a not-quite-proper Rembrandt. And, more essential for the argument here, the artist under these restrictions would not be able to develop his own language of forms and style. He would be severely hampered in his expression and interpretation.
This discrepancy between the contexts of interpretation through code and through humanistic inquiry is reflected, I think, in the way DH-ers tend to talk about their analytical methods as two separate realms. The best known of these metaphors is the contrast between ‘close’ and ‘distant’ reading, initiated by the work of Franco Moretti (2013). Ramsay (2011b) and Kirschenbaum (2008) also clearly differentiate between two levels or modes of analysis: one is a micro perspective, the other operates at a macro-level scope. Kirschenbaum described the switch from computational analysis of bit-level data to constructing a meaningful perspective at the hermeneutic level of a synthesizing narrative as “shuttling” back and forth between micro and macro modes of analysis. Martin Mueller (2012) in turn wished for an approach of “scalable reading” that would make this switching between ‘close’ and ‘distant’ forms of analysis less hard, the shuttling more seamless.
We have microscopes and telescopes; what we lack is a tele-zoom lens, a way of seamlessly connecting the close with the distant. Without it these modes of analysis will stay well apart, because the ‘scientistic’ view of computer analysis as objective forsakes the rich tradition of humanistic inquiry, as Hayles (2012) remarks. Distant reading as analytic coding does gear towards an intellectual deployment of code (Ramsay 2011b). But the analytic reach of quantitative approaches is still quite unimpressive. I stress that this is not the same as ‘inadequate’; I dare bet there is beef in our methods (Scheinfeldt 2010). But although we count words in ever more subtle statistical ways to analyze, for instance, style, the reductive nature of these methods seems to kill the noise that is often relevant to much scholarly research (McGann 2015). For now it remains striking that the results of these approaches are confirmation-oriented rather than generative of new questions or hypotheses; mostly they seem to reiterate well-known hypotheses. Nevertheless, current examples of computational analyses could very well be the baby steps on a road towards a data-driven approach to humanities research.
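To make the word counting mentioned above concrete, here is a minimal sketch of one such ‘distant’ measure: the relative frequencies of a text’s most common words, a staple of stylometric analysis. The function name and the sample text are invented for illustration; this stands in for no specific published method:

```python
from collections import Counter
import re

def top_word_frequencies(text, n=5):
    """Relative frequencies of the n most common words in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    return {word: count / total
            for word, count in Counter(words).most_common(n)}

sample = ("It was the best of times, it was the worst of times, "
          "it was the age of wisdom, it was the age of foolishness")
print(top_word_frequencies(sample, n=3))
```

High-frequency function words like these are exactly the features many stylometric studies rely on; the reduction of a text to such a handful of ratios is also exactly the loss of ‘noise’ the paragraph above worries about.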
So if there is intellectual merit in a non-service role for code, why do the realms of coding and humanistic inquiry stay as far apart as they seem to? Let us for a moment pass over the easy arguments that are all too often just there to serve the agenda of some subcultures in the humanities community. It is not a lack of transferable skills: I can teach ten-year-old girls HTML in thirty minutes; everyone can learn to code. Nor is it an inherently conservative and technology-averse nature of the humanities (Courant et al. 2006); like any community, the humanities has its conservative pockets and its idealist innovators. No, somehow the problem lies with computation and coding itself. Apparently we have not yet found the principles and form of computing that allow it to treat the complex nature of noisy humanities data and the even more complex nature of the humanities’ abductive reasoning. That is, reasoning based more on what is plausible than on what is provable or solvable as an equation. The humanities are about problematizing what we see, feel, and experience; about creating various and diverse perspectives so that one interpretation can be compared with another, enriching us with various informed views. Such varied but differing views and interpretations are a type of knowledge too, albeit a different kind of knowledge than that which results from quantification (Ramsay 2011b:7). These views acquire a scholarly or scientific status once they are rigorously tried, debated, and peer reviewed.
One aspect that sets humanities arguments apart from other types of scientific reasoning and analysis is their strong relation to, and reliance on, narrative. Narrative is the glue of the humanities’ abductive logic. But code has narratological aspects too: as Donald Knuth (1984) has argued, there is a literacy of code. Most humanities scholars are quite literally illiterate in this realm. Yet many of the illiterate claim intellectual primacy over code-reliant research in the humanities. But to create an adequate intellectual narrative you need to be well versed in the language you are using; you must be literate. I am not a tenured professor of digital humanities, but just the same I dare posit that you cannot wield code as an intellectual tool if you are not literate in it.
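A toy sketch may suggest what code written in this literate spirit looks like: the prose commentary carries the argument, the code carries the computation. The commentary-heavy style below only gestures at literate programming rather than reproducing Knuth’s WEB system, and the measure chosen (the type-token ratio, a textbook gauge of lexical diversity) is an arbitrary illustration:

```python
def type_token_ratio(text):
    # The type-token ratio is a crude measure of lexical diversity:
    # the number of distinct words (types) divided by the total
    # number of words (tokens).
    tokens = text.lower().split()
    # A ratio near 1.0 means almost every word is new to the text;
    # a low ratio signals heavy repetition.
    return len(set(tokens)) / len(tokens)

print(type_token_ratio("the cat sat on the mat"))
```

Read aloud, comments and code together form a small narrative of method: what is measured, how, and what the number means. That, in miniature, is the literacy at stake.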
Does this mean that the realms of humanities-oriented computation and of humanistic abductive inquiry must stay apart? No, it means that non-code-literate humanists should grant those literate in both code and the humanities the time and space to develop the intellectual agenda of code in the humanities. At the same time, those literate in code should reflect on their mimicry of a ‘scientistic’ empiricism. The intellectual agenda of the humanities is not to plow aimlessly through ever more data. Number crunching is a mere base prerequisite, even within its own narrow understanding of scientific style. Only when we get to making sense of these numbers, to applying interpretation to them, do we unleash the full power of the humanistic tradition. And making sense is all about building meaningful perspectives through the creation of narratives. The computationally literate in the humanities need to figure out the intellectual agenda of digital humanities, and they need to develop their own style of scientific and intellectual narrative that connects it to the mainstream intellectual effort of the humanities.
With all this in mind it is encouraging to learn that the Jupyter Notebook project has acquired substantial funding for further development (Perez 2015). We do not yet have that dreamed-of tele-zoom, that scalable mode of reading. But Jupyter Notebooks may well be an ingredient of the glue needed to link the intellectual effort of humanities coding to mainstream humanities discourse. These notebooks started out as a tool for the interactive teaching of Python; the IPython Notebook developed into the language-agnostic Jupyter Notebook, which allows the mixing of computer code and human-language narrative. In Jupyter Notebooks, text and code integrate to clarify and support each other; the performative aspects of code and text are bundled to express the intellectual merit of both. Fernando Perez and Brian Granger (2015) built their funding proposal strongly around the concept of the computational narrative: “Computers are good at consuming, producing and processing data. Humans, on the other hand, process the world through narratives. Thus, in order for data, and the computations that process and visualize that data, to be useful for humans, they must be embedded into a narrative—a computational narrative—that tells a story for a particular audience and context.”
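On disk, such a computational narrative is simply a JSON document alternating prose cells and code cells. The sketch below hand-writes a minimal two-cell notebook in the nbformat 4 schema that Jupyter reads; the filename and cell contents are invented for illustration:

```python
import json

# A minimal notebook in the nbformat 4 JSON schema: one markdown cell
# of human-language narrative, one code cell that does the computing.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 4,
    "metadata": {},
    "cells": [
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": ["## Word counts\n",
                       "We first tally the most frequent words."],
        },
        {
            "cell_type": "code",
            "execution_count": None,
            "metadata": {},
            "outputs": [],
            "source": ["from collections import Counter\n",
                       "Counter('to be or not to be'.split()).most_common(2)"],
        },
    ],
}

# Written out, this file opens in Jupyter as prose and runnable code
# interleaved in one document.
with open("narrative.ipynb", "w") as f:
    json.dump(notebook, f, indent=1)
```

The point is less the file format than what it makes structurally explicit: argument and computation are peers in the same document, which is precisely the middle ground the essay is after.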
Hopefully Jupyter Notebooks will be part of a leveling of the playing field for both narratively inclined and computationally oriented humanities scholars. Hopefully they will become a true middle ground where computational and humanistic narrative can meet, mix, and grow from a methodological pidgin into a mature new semiotic system for humanistic intellectual inquiry.
Courant, P.N. et al., 2006. Our Cultural Commonwealth: The report of the American Council of Learned Societies’ Commission on Cyberinfrastructure for Humanities and Social Sciences. University of Southern California.
Hayles, N.K., 2012. How We Think: Digital Media and Contemporary Technogenesis, Chicago (US): University of Chicago Press.
Kirschenbaum, M., 2008. Mechanisms: New Media and the Forensic Imagination, Cambridge (US): MIT Press.
Knuth, D.E., 1984. Literate Programming. The Computer Journal, 27(1), pp.97–111.
McGann, J., 2015. Truth and Method: Humanities Scholarship as a Science of Exceptions. Interdisciplinary Science Reviews, 40(2), pp.204–218.
Moretti, F., 2013. Distant Reading, London: Verso.
Mueller, M., 2012. Scalable Reading. Scalable Reading—dedicated to DATA: digitally assisted text analysis. Available at: https://scalablereading.northwestern.edu/scalable-reading/ [Accessed September 22, 2015].
Perez, F., 2015. New funding for Jupyter. Project Jupyter: Interactive Computing. Available at: http://blog.jupyter.org/2015/07/07/jupyter-funding-2015/ [Accessed October 1, 2015].
Perez, F. & Granger, B.E., 2015. Project Jupyter: Computational Narratives as the Engine of Collaborative Data Science. Project Jupyter: Interactive Computing. Available at: http://blog.jupyter.org/2015/07/07/project-jupyter-computational-narratives-as-the-engine-of-collaborative-data-science/ [Accessed October 1, 2015].
Prescott, A., 2012. To Code or Not to Code? Digital Riffs: extemporisations, excursions, and explorations in the digital humanities. Available at: http://digitalriffs.blogspot.nl/2012/04/to-code-or-not-to-code.html [Accessed October 1, 2015].
Ramsay, S., 2011a. On Building. Stephen Ramsay — Blog. Available at: http://stephenramsay.us/text/2011/01/11/on-building/.
Ramsay, S., 2011b. Reading Machines: Toward an Algorithmic Criticism (Topics in the Digital Humanities), Urbana (US): University of Illinois Press.
Scheinfeldt, T., 2010. Where’s the Beef? Does Digital Humanities Have to Answer Questions? Found History. Available at: http://foundhistory.org/2010/05/wheres-the-beef-does-digital-humanities-have-to-answer-questions/ [Accessed October 1, 2015].