Michael J. Price Lab for Digital Humanities

Machine-Aided Close Listening and the Performed Poem

Chris Mustazza

Ph.D. student, English Literature

Funding Period: March 2016

Assistant: Zoe Stoller, CAS '18 (English & Creative Writing)


Since Charles Bernstein’s Close Listening, the study of a poem’s performance alongside its text has drawn rapidly increasing scholarly interest. The field of phonotextuality has also given rise to large-scale projects of empirical analysis of poetry audio, such as Tanya E. Clement’s concept of Distant Listening, which uses high-performance computing technology to analyze large corpora of poetry recordings. Clement pursues questions such as whether a computer can be trained to locate every instance of applause across the entire PennSound archive, which contains over 60,000 audio files. I propose that “Machine-Aided Close Listening” lies somewhere on the spectrum between close listening and distant listening: I am interested in whether visualizations of the performance of a single poem can aid in an exegesis of that poem. In other words, can the ability to see aural facets like pitch dynamics and loudness modulations allow us to explore a poem from a new distance? Can close listening, empirically bolstered by the aid of a computer, open new possibilities for the study of poetry?

The aim of the “Machine-Aided Close Listening” project is to create a user-facing tool that displays an alignment of three dimensions: 1) the text of a poem, 2) the audio of a performance of it, and 3) a visualization of that audio (likely a spectrogram). These alignments will be stable objects (each at a fixed URL) intended to be cited in essays that take up questions around how sound can extend or complicate the content and/or visual form of a poem. The current PennSound text-audio aligner will be the basis for the application, extended to include a spectrogram that exposes F0 (the fundamental frequency, heard as the pitch of the voice in the recording). Possible second-phase extensions include other visualizations, such as pitch curves or waveforms. Each alignment will cover a single poem, with a one-to-one correspondence between URL and alignment, allowing a stable source to be cited in publications.
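The project’s actual aligner and spectrogram stack are not specified here, but as an illustration of the kind of analysis involved, the following is a minimal NumPy sketch that estimates F0 frame by frame via autocorrelation on a synthetic test tone. The function names are hypothetical; a production tool would more likely rely on a dedicated pitch tracker (e.g., Praat or librosa’s pYIN).

```python
import numpy as np

def f0_by_autocorrelation(frame, sr, fmin=60.0, fmax=500.0):
    """Estimate the fundamental frequency (Hz) of one audio frame.

    Looks for the strongest autocorrelation peak whose lag corresponds
    to a plausible vocal pitch period between fmin and fmax.
    """
    frame = frame - frame.mean()
    # Keep only non-negative lags of the full autocorrelation.
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / fmax)                    # shortest plausible period
    lag_max = min(int(sr / fmin), len(corr) - 1)  # longest plausible period
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sr / lag

def track_f0(signal, sr, frame_len=2048, hop=512):
    """Slide a window over the signal; return one F0 estimate per frame."""
    return np.array([
        f0_by_autocorrelation(signal[i:i + frame_len], sr)
        for i in range(0, len(signal) - frame_len, hop)
    ])

# Synthetic input: a 1-second 220 Hz sine tone sampled at 22,050 Hz.
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220.0 * t)
f0_track = track_f0(tone, sr)
```

For a steady 220 Hz tone, every frame’s estimate lands near 220 Hz; plotting such a track over time, aligned with the poem’s text, is the kind of pitch-curve view mentioned above as a second-phase extension.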

At this time, most of the tools that allow for this kind of analysis are desktop applications (rather than websites) and are not well suited to poetry (particularly lineated/spaced content). This project will provide public-facing alignments, made from permissioned text and audio from the PennSound archive. The audience will be teachers, students, and scholars. Beyond its scholarly uses, there is a strong possibility of working this tool into the classroom as a new mode of poetry pedagogy. The build will also be approached with scalability in mind, leaving open the possibility of using the app in heavy-traffic environments, such as poetry MOOCs.

The first alignments will be of two of James Weldon Johnson’s sermon-poems from his 1927 collection God’s Trombones. For more on the special relationship between text and sound in these poems, see Mustazza’s essay on the Johnson recordings (forthcoming in Oral Tradition).
