Natalie Berkman is a doctoral candidate in the Department of French and Italian at Princeton University, currently writing her dissertation on the mathematical methods of the Oulipo under the direction of Professor David Bellos. An associated member of the ANR DifdePo, she also coordinates the transcription team for the Oulipo archival transcription project. Her dissertation examines the influence of various branches of mathematical thought — set theory, algebra, combinatorics, algorithms, and geometry — on the philosophy and production of the Oulipo and the reception of Oulipian texts. She is currently completing one of the Princeton Center for Digital Humanities' inaugural projects for the 2015–16 academic year, which consists of digital annexes for her dissertation that she is programming herself in Python.
The formally constrained work of the Oulipo (l'Ouvroir de Littérature Potentielle, loosely translated as Workshop of Potential Literature) lends itself particularly well to digital studies, a fact the members of the group quickly recognized. In pursuit of its goal of eliminating chance from its literary production, the group was naturally drawn to the determinism of computers, for which true chance is simply impossible. In its early years, therefore, the group used algorithmic procedures as a starting point for various texts and also attempted to program these texts on actual computers, creating some of the first electronic literature and embarking on proto-digital humanities work as early as the 1960s and 1970s, before abandoning these efforts and relegating all subsequent activity to a subsidiary group.
To understand the Oulipo's forays into computer science and, more importantly, why the group abandoned them, I designed and carried out one of the inaugural projects of the Princeton Center for Digital Humanities. The goal was twofold: first, through exploratory programming, I intended to create interactive, digital annexes to accompany my doctoral dissertation; second, and more importantly, I hoped that by attempting to reproduce the Oulipo's own algorithmic efforts, I would gain similar insights into the nature of Potential Literature and be able to understand why the group abandoned such efforts after the 1970s.
This article describes the content, development, and results of my project. For each of my three Python-based annexes, I offer a historical survey of the Oulipian text or procedure discussed within and of the Oulipo's own proto-digital humanities experiments; I then discuss my own experiences as a coder-researcher, what learning Python has brought to my project, and how my exploratory programming offered me a new kind of critical reflection. Establishing these annexes forced me to learn to code, work that not only produced digital texts but also helped me reflect on the notion of chance in a more nuanced way. Finally, coding has allowed me to better understand the Oulipian mentality concerning this sort of digital experimentation.
Is Digital Humanities work creation or research, or some new hybrid?
The Ouvroir de Littérature Potentielle (OuLiPo, loosely translated into English
as the Workshop for Potential Literature), an experimental writing group founded
in Paris in 1960, attempts to apply mathematical procedures to literature,
inventing procedures (known as constraints) to follow
during the composition of a text. One of the main goals of this strategy is to
reduce the role of chance, and it is therefore unsurprising that computers were
one of the first items on its early agenda. Computers are utterly incompatible
with the notion of chance, and in theory should have been a perfect, and rather
timely solution. Indeed, the founding of the Oulipo coincided with a critical
stage in the development of computers, and the group took full advantage
thereof, programming their texts and procedures through partnerships with Bull
Computers and the Centre Pompidou. Early computing put one in much closer
contact with the basic fabric of coding and the members quickly learned that
writing code requires one to divide a problem into its simplest, logical
components, very much like the elementary procedures they were inventing in the
group's formative years. However, by the 1970s, the group abandoned such
efforts, relegating all future algorithmic experimentation to a subsidiary.
In recent years, the field of digital humanities has been gaining popularity.
While far from nascent (its origins can be dated back to as early as the 1940s),
the discipline has failed to define itself, often preferring a broad scope that
situates its activity at the intersection of computing, technology, and
humanities scholarship. In theory, this sort of vague definition could be seen
as beneficial, encompassing a wide range of tools and techniques that can be
applied to humanistic work. In practice, while a great variety of scholarship
has been and is currently being undertaken that claims to use digital humanities
practices, a majority of this work seems to fall under the categories of textual
encoding, the creation of digital archives, and the use of ready-made tools to
run some type of analysis on a digital text or visualize data from it. The fact
that digital humanists do not seem to prioritize learning to program has been
lamented on several occasions by Nick Montfort, notably in his treatise on exploratory programming for humanistic inquiry.
However, the goal of this article is not to give a critical overview of the use of Oulipo studies for digital humanities. It is rather to demonstrate how I was able to use exploratory programming to understand the Oulipo's forays into computer science and, more importantly, why the group abandoned such initiatives. To this end, I designed and carried out one of the inaugural projects of the Princeton Center for Digital Humanities under the guidance of Cliff Wulfman. By attempting to reproduce the Oulipo's own algorithmic efforts, I hoped to gain similar insights into the nature of Potential Literature and be able to understand why the group abandoned such efforts. My original intention was to create five digital annexes, of which I completed only the three that would constitute the final product.
This article describes the content, development, and results of my project. For each of my three Python-based annexes, I offer a historical survey of the Oulipian text or procedure discussed within and of the Oulipo's own proto-digital humanities experiments; I then discuss my own experiences as a coder-researcher, what learning Python has brought to my project, and how my exploratory programming offered me a new kind of critical reflection. Establishing these annexes forced me to learn to code, work that not only produced digital texts but also helped me reflect on the notion of chance in a more nuanced way. Finally, coding has allowed me to better understand the Oulipian mentality concerning this sort of digital experimentation. Carrying out a digital humanities project on the Oulipo — itself a quasi-academic research group that both analyzes and creates — resulted in a hybrid experiment with multiple, varied results and insights, which helped me both understand the history and development of Oulipian aesthetics and read the group's texts more closely. While unconventional given the nature of digital humanities work today, the results of my research demonstrate a productive use of programming as both a creative and an analytic exercise that can prove fruitful in future digital humanities work.
Published in 1961 by Oulipian cofounder Raymond Queneau and followed by a critical essay by his fellow cofounder François Le Lionnais, the Cent mille milliards de poèmes (One Hundred Thousand Billion Poems) is the first Oulipian text and therefore serves as an illustration of the group's initial goals and influences, one of the main ones being computer science.
In the preface, unconventionally called a "mode d'emploi" (instructions for use), Queneau notes that each of his original sonnets possesses a theme and continuity; the sonnets produced by the system, however, do not have that same charm.
With 10 sonnets of 14 lines each, all adhering to a few simple rules, Queneau's system is capable of producing 10^14 potential sonnets.
Potential is a double entendre here: the volume contains 10^14 sonnets only in potentia, and it is the reader who must actualize any one of them.
Before reaching the poems, the reader is faced with an epigraph attributed to Alan Turing: "Only a machine can read a sonnet written by another machine" ("Seule une machine peut lire un sonnet écrit par une autre machine"). Turing's actual remark reads:
...I do not see why it [a computer] should not enter any one of the fields normally covered by the human intellect, and eventually compete on equal terms. I do not think you can even draw the line about sonnets, though the comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine
Note the use of the passive voice in the original Turing citation. Queneau (who understood English too well to make such a blatant translation error) makes the machine the subject of his epigraph. The reader of his volume, he claims while ventriloquizing Turing, is a computer. The physical conception of the volume does indeed support this proposition: rather than taking pleasure in reading an exponential number of sonnets (most of which are an incoherent jumble of verses from prewritten sonnets, devoid of a theme and written by no one), the reader must manipulate the verses of Queneau’s original poems, cut into strips to produce a functional book-machine hybrid. The Oulipo quickly turned to actual machines to reimagine this foundational text, literally programming a combinatorial poetry collection that already owed much to computers.
Before designing my own digital editions, I needed to take into consideration what the Oulipo had already attempted.
After deciding at their second meeting to seek out computers to pursue various
types of analytic work, the founding members of the Oulipo came into contact
with Dmitri Starynkevitch, a French-born Russian immigrant, who was working for
Bull Computers at the time. He began Oulipian computer work on the CAB 500, a computer that operated with the PAF language (Programmation Automatique des Formules).
In August 1961, Starynkevitch sent Queneau excerpts of sonnets that the machine had generated from the collection. The group responded:
"We would like M. Starynkevitch to detail the method he used; we hope that the choice of the verses was not left to chance" ("On souhaita que M. Starynkevitch nous précise la méthode utilisée ; on espéra que le choix des vers ne fut pas laissé au hasard").
After their collaboration with Starynkevitch, the Oulipo had one final group effort in the 1970s, a project carried out with the A.R.T.A. (Atelier de Recherches et Techniques Avancées) at the Centre Pompidou, which sought "a possible agreement between computer science and literary creation" ("un possible accord entre l'informatique et la création littéraire").
The first experimental text was Queneau's Cent mille milliards de poèmes.
"The printed collection is very nicely conceived, but the manipulation of the strips on which each verse is printed is sometimes delicate" ("Le recueil imprimé est très joliment conçu mais la manipulation des languettes sur lesquelles chaque vers est imprimé est parfois délicate").
This method responds to the Oulipo's earlier question about Starynkevitch's program regarding the method of choosing the verses. However, this new program is still surprisingly restrictive. Mark Wolff acknowledges its limits: "Such a program has no potential in the Oulipian sense because random numbers produce aleatory effects. The original algorithm preserves an active role for the user, even if that role requires the minimal engagement of typing one's name in order to sustain the creative process."
Given my understanding of the text and the Oulipo's multiple attempts to program it, I began my project with the Cent mille milliards de poèmes.
To transform Queneau's work, I first had to visualize the original 10 sonnets in computer science terms, as an array of arrays. In computer science, an array is a type of data structure consisting of a collection of elements, each identified by at least one index or key. A sonnet, by definition, is already an array of fourteen verses, each of which can be assigned a numbered index. In order to make Queneau's sonnets intelligible to a computer program, I needed to rewrite them as an array of arrays: ten sonnets of fourteen lines each. Once I had this data structure, I had to learn to write programs that would pick a single verse from one of the original sonnets, building upon that to design a program that would generate a pseudo-random sonnet from a 14-character key (each digit of this key would be between 0 and 9, indicating which of the original 10 sonnets to take the corresponding verse from). Finally came the creative part: I had to decide how to produce chance. In the end, I came up with three different ways.
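The data structure and key-based generation described above can be sketched in a few lines of Python. The verses here are numbered placeholder strings standing in for Queneau's actual lines:

```python
# Queneau's 10 sonnets as an "array of arrays": placeholder strings
# stand in for the actual verses.
sonnets = [[f"sonnet {s}, verse {v}" for v in range(14)] for s in range(10)]

def sonnet_from_key(key: str) -> list[str]:
    """Assemble one of the 10**14 potential sonnets.

    Each of the key's 14 digits selects which of the ten base
    sonnets supplies the verse at that position.
    """
    if len(key) != 14 or not key.isdigit():
        raise ValueError("key must be a string of exactly 14 digits")
    return [sonnets[int(digit)][line] for line, digit in enumerate(key)]

print("\n".join(sonnet_from_key("31415926535897")))
```

Any source of 14 digits — a timestamp, a hash of the reader's name, a random number — then becomes one possible way of "producing chance."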
Ultimately, my annex is less than satisfying. Not only does the reader of my edition still have no freedom, but he or she is not even allowed into the process. The only person who had any fun in this is me. That said, perhaps that is precisely the point. Through programming, I have gained a more thorough understanding of this text and its implications, and I also understand how Queneau's choice of extremely specific subject matter for each of his original poems is the key factor in the hilarity of the potential poems. Each verse of Queneau's original poems is essentially a self-contained unit, which is what allows the verses to be recombined at all.
In a similar vein, this method can be adapted to interpret and analyze all combinatorial poetry. Poetry, as the Oulipo was well aware, by its fixed forms and propensity for patterns, can be inherently combinatorial. The Oulipo was not the first to note this property — the Grands Rhétoriqueurs, for instance, were writing similarly expandable poetry as early as the fifteenth century, Jean Molinet among them. What such texts invite is a kind of "distant reading" that is better equipped to explicate such computer-inspired poetry.
Queneau's Un conte à votre façon (A Story as You Like It), the basis for my second annex, is presented in explicitly computational terms: "This text is inspired by the presentation of instructions destined for computers, or else by programmed teaching. It is a structure analogous to the 'tree literature' proposed by F. Le Lionnais at the 79th meeting [of the Oulipo]" ("Ce texte… s'inspire de la présentation des instructions destinées aux ordinateurs ou bien encore de l'enseignement programmé. C'est une structure analogue à la littérature « en arbre » proposée par F. Le Lionnais à la 79e réunion").
Queneau's story initially gives the reader a choice between the story of
three little peas, three big skinny beanpoles, or three average mediocre
bushes. The choices are all binary, and mostly stark oppositions. For
instance, either the reader chooses to read the tale of the three peas or he
or she does not. Should the reader prefer not to read this first option, he
or she will find that the two alternatives offer meager results. Refusing
all three terminates the program almost immediately. If the reader chooses
the proper beginning and advances in the tale, the first few choices of the
story allow the reader to have a say in descriptive aspects of the story —
whether or not the peas dream, what color gloves they wear as they sleep in
their pod, and whether or not they roll around on a highway first. In many
cases, pointless alternatives are either offered to the reader in an effort
to convince him or her of his or her autonomy (the alternative descriptions)
or as unsatisfying dead ends (the false beginnings). Regardless of the path the reader takes, there is only one real story. As one of Queneau's own instructions concedes: "If not, go to 15 anyway, because you won't see anything" ("si non, passez également à 15, car vous ne verrez rien").
In short, this tale offers its reader only the illusion of choice.
Starynkevitch never had the opportunity to program this text; it was only realized on a computer later, in the A.R.T.A. years.
"First, the computer dialogues with the reader by proposing the different choices; then it edits and cleans up the chosen text without the questions. The pleasure of playing and the pleasure of reading are therefore combined" ("L'ordinateur, dans un premier temps, dialogue avec le lecteur en lui proposant les divers choix, puis dans un second temps, édite 'au propre' et sans les questions, le texte choisi. Le plaisir de jouer et le plaisir de lire se trouvent donc combinés").
The computer is nevertheless reductive in terms of reader interaction, even though Paul Braffort published one such program. At this point in the project, the programming textbook I was working through ended with an example in which the user could make a program that allowed a reader to move through a branching story.
Cliff introduced me to graphviz, an open-source graph (network) visualization tool that is freely available for Python. Given my background in mathematics and my research on graph theory, I felt immediately at ease with the way this program operates. In graph theory, a graph is defined as a set of nodes and edges; in graphviz as well, in order to make a graph, I had to define all requisite nodes and edges. The program then generates a spatially efficient visualization of the graph. As an exercise, I made a graph of the famous Seven Bridges of Königsberg problem.
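As a self-contained illustration, here is a sketch that emits DOT source — the plain-text format graphviz renders — for the Königsberg graph, using plain Python rather than the graphviz bindings themselves:

```python
def dot_graph(name: str, edges: list[tuple[str, str]]) -> str:
    """Emit Graphviz DOT source for an undirected (multi)graph."""
    lines = [f"graph {name} {{"]
    lines += [f'    "{a}" -- "{b}";' for a, b in edges]
    lines.append("}")
    return "\n".join(lines)

# The four land masses of Königsberg and its seven bridges; the
# parallel edges make this a multigraph, which DOT handles natively.
bridges = [
    ("north_bank", "island"), ("north_bank", "island"),
    ("south_bank", "island"), ("south_bank", "island"),
    ("north_bank", "east_bank"), ("south_bank", "east_bank"),
    ("island", "east_bank"),
]
print(dot_graph("koenigsberg", bridges))
```

Feeding this source to the `dot` command (or to `graphviz.Source` in the Python package) produces the drawing.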
With this graph theoretical program in my Python arsenal, I was able to make my own graph of Queneau's tale.
Unlike Queneau's original graph, mine does not depend on my own aesthetic
preferences or interpretation of the text. The vertical layout, determined
by graphviz's spatial constraints, seems to have — without any knowledge of
the content of the nodes — understood something fundamental about the
structure of Queneau's tale. This representation of the graph clearly demonstrates that there is only one real path through the story.
This is an odd, almost disconcerting outcome of my project that seems to confirm many criticisms of digital humanities scholarship. If reading the text is unnecessary to its interpretation, then is this not an abject refusal of traditional humanities work? I would argue that this method does not necessarily supersede a traditional close reading, but rather provides a legible visualization of the potential of similar graph theoretical texts. Any choose-your-own-adventure story is composed of nodes and edges which can be programmed — as I have done — using graphviz. Indeed, my program can be adapted very easily to create an interactive edition and graph of any such story. I would encourage digital humanists who wish to try their hand at exploratory programming to apply this method to more complex works of which a traditional close reading might be obscured by the number of nodes and edges.
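The adaptation just described can be sketched as follows. The node texts and choices here are invented placeholders, not Queneau's; any choose-your-own-adventure story could be encoded the same way:

```python
# A hypothetical choose-your-own-adventure skeleton: each node maps
# to its text and the labeled choices leading onward.
story = {
    1: ("Do you wish to hear the story of the three peas?", {"yes": 2, "no": 3}),
    2: ("The three peas dream in their pod. The end.", {}),
    3: ("Then there is nothing to see. The end.", {}),
}

def traverse(story, start, choices):
    """Follow a sequence of choices through the graph; return the path."""
    path, node = [start], start
    for choice in choices:
        _, options = story[node]
        if choice not in options:
            break
        node = options[choice]
        path.append(node)
    return path

def to_dot(story):
    """Emit Graphviz DOT source so the same structure can be drawn."""
    lines = ["digraph story {"]
    for node, (_, options) in story.items():
        for label, target in options.items():
            lines.append(f'    {node} -> {target} [label="{label}"];')
    lines.append("}")
    return "\n".join(lines)

print(traverse(story, 1, ["yes"]))
```

The same dictionary thus powers both the interactive edition (`traverse`) and the visualization (`to_dot`).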
The S+7 method was invented by Jean Lescure and immediately gained popularity in the early Oulipo, most likely due to its precise definition, simplicity of execution, and quick and often hilarious results. Proposed at one of the first Oulipo meetings, on 13 February 1961, it enjoys a privileged position as one of the group's first official constraints. S+7 begins with a preexisting text, locates all the nouns (S stands for substantif, or noun), and replaces them with the noun that comes seven entries later in a dictionary of the author's choosing. Early Oulipian publications, beginning with the group's first collected volume, offered numerous examples of the technique.
Lescure's article detailing the method claims that since the S+7 operator is a purely mechanical function, the results depend upon the chosen text and the dictionary used. S+7 owes its entire effect to the structure of the original text and the nature of the dictionary; it is therefore recognizable syntactic structures that become the most humorous when nouns have been swapped out for others that are alphabetically not too far away.
English-language S+7 exercises produced by online generators are funny precisely because the original texts are so recognizable. The syntactic structure, coupled with a few surprising substitutions in vocabulary, derails the reading experience and helps us understand that nothing is sacred — not even great literary classics or the Bible.
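The procedure itself reduces to a very small sketch, assuming the nouns have already been identified (a real implementation would need part-of-speech tagging) and using a toy alphabetical dictionary:

```python
# Toy alphabetical dictionary standing in for a real one.
dictionary = sorted([
    "apple", "banana", "carrot", "dog", "eagle", "fig", "grape",
    "horse", "island", "jewel", "kettle", "lamp", "mirror", "night",
])

def s_plus_n(noun: str, n: int = 7) -> str:
    """Replace a noun with the entry n places later in the dictionary,
    wrapping around at the end, as Lescure's method prescribes.
    Words not in the dictionary pass through unchanged."""
    if noun not in dictionary:
        return noun
    return dictionary[(dictionary.index(noun) + n) % len(dictionary)]

print(s_plus_n("apple"))  # "horse": seven entries after "apple"
```

Changing n, or tagging a different part of speech, yields the more general variations the group went on to discuss.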
The Oulipo’s further work on the S+7 dealt more with
variations on the genre. Mathematically speaking, Le Lionnais pointed out
that the S+7 is a more specific version of M±n, in which M represents any taggable part of
speech and n represents any integer value.
The Oulipo did experiment with some recognizable texts early on, once again
in the context of their collaboration with Starynkevitch. In January 1963,
Dmitri Starynkevitch was an invited guest at a meeting. Lescure insisted
that he make a program for producing S+7’s on the CAB
500, and Starynkevitch spoke of the difficulties that arise when programming
S+7 given the lack of a dictionary. For a human
author, applying the S+7 method is tedious. For a
computer of the 1960’s, it was virtually impossible. Starynkevitch explains
such difficulties: "All the difficulty obviously comes from the amount of material you need — that is, to introduce into the machine. It is currently possible for us to work with a few sonnets or a few pages of text (that is what we did for the S+7); [working with an entire dictionary], that is currently impossible" ("Toute la difficulté vient, évidemment, de la quantité de matériau dont vous avez besoin — donc : à introduire en machine. Il nous est possible de travailler sur quelques sonnets ou quelques pages de texte. C'est ce que nous avons fait pour les S+7").
Eventually, they proposed to correct various issues by hand. By the following
meeting, Starynkevitch had programmed the S+7 method
and sent Queneau examples of the program applied to various texts.
These early examples were never published, and the S+7 computer experiments were never mentioned in any publications. My suspicion is that the members were displeased with the S+7 computer program for two main reasons, despite the fact that the procedure itself is ostensibly purely algorithmic and easily executable by an actual computer.
First, while early computing was able to produce S+7’s, so much was done by hand that the Oulipo very quickly abandoned
the idea of using computers to do S+7’s. What is the
point of automating a procedure if such an automation requires the creation
and perforation of a noun dictionary that has no other purpose?
Additionally, correcting gender discrepancies that the computer could not
understand was likely just as time-consuming as doing the entire thing by
hand in the first place.
While the A.R.T.A. project focused more on strictly combinatorial productions (where a fixed number of elements could be recombined as the user wished, as in the case of Marcel Bénabou's aphorisms), the group did not return to the S+7. In July 1981, a conversation between Paul Braffort and Jacques Roubaud (a mathematician and poet, another member of the second generation of Oulipians) saw the birth of the ALAMO (Atelier de Littérature Assistée par la Mathématique et les Ordinateurs), the subsidiary to which all subsequent algorithmic experimentation was relegated.
My last digital annex is much more modest in scope, consisting only of an S+7 generator. Since this is such a canonical
Oulipian technique that is often mentioned but very rarely analyzed, my hope
was to provide the reader of my chapter 2 with a program to generate his/her
own S+7’s. In this way, when I claim that well-known
texts produce more comedic effects, the reader can generate an S+7 of a well-known text and a lesser-known one in
order to enrich my analysis with more examples. I had also hoped that such a
program might be able to provide multiple dictionaries so that the reader
can experiment with the effect of different types of dictionaries on the
procedure and resulting texts. A final outcome was supposed to be the
ability to subtract 7 from previously written S+7’s,
allowing the reader to confirm whether or not an Oulipian had faithfully
produced an S+7 without cheating. While my chapter 2 includes a diverse and varied selection of texts and procedures, only some of which are substitution-based like the S+7, my annex aims to provide one small, interactive example of the type of potential literature the early Oulipo was examining.
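The "subtract 7" verification described above is simply the inverse shift. A sketch with a toy dictionary, again assuming the nouns are already tagged:

```python
# Toy alphabetical noun dictionary; a real run would use a full one.
dictionary = sorted(["arm", "bed", "cat", "day", "ear", "fox", "gem",
                     "hat", "ink", "jar", "key", "log", "map", "net"])

def shift_nouns(words: list[str], n: int) -> list[str]:
    """Shift every dictionary word by n entries (wrapping around);
    words outside the dictionary pass through unchanged."""
    return [
        dictionary[(dictionary.index(w) + n) % len(dictionary)]
        if w in dictionary else w
        for w in words
    ]

original = ["the", "cat", "sat", "on", "the", "bed"]
plussed = shift_nouns(original, 7)   # the S+7
restored = shift_nouns(plussed, -7)  # the S-7 undoes it exactly
print(plussed, restored == original)
```

If a published S+7 does not survive this round trip with a given dictionary, the author either cheated or used a different dictionary.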
Since my other annexes dealt exclusively with original texts written in French (both by Queneau), I chose English-language material for this one.
This annex provided an excellent excuse to acquaint myself with Natural Language Processing using the NLTK's textbook and programs (https://www.nltk.org/). While I could
easily tag the nouns in my chosen texts manually, such laborious work would
inevitably produce the same dissatisfaction as early Oulipian computer
experiments with Starynkevitch. Nevertheless, my first step had to be to
create dictionaries of nouns using nltk. Theoretically, what Cliff and I
have envisioned allows the reader to produce dictionaries with specific vocabularies, which makes for more interesting S+7 variants. The first noun
dictionary I produced came from an online edition of Edgar Allan Poe’s
complete works. An S+7 performed with this dictionary colors any source text with Poe's distinctive vocabulary.
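The noun-extraction step can be sketched as follows. In the annex itself, `nltk.word_tokenize` and `nltk.pos_tag` supply the tags; here a tiny hand-made tag table stands in so the sketch is self-contained:

```python
# (word, Penn Treebank tag) pairs, as nltk.pos_tag would return them;
# this small sample is hand-tagged so the sketch runs without NLTK.
tagged = [
    ("once", "RB"), ("upon", "IN"), ("a", "DT"), ("midnight", "NN"),
    ("dreary", "JJ"), ("while", "IN"), ("i", "PRP"), ("pondered", "VBD"),
    ("weak", "JJ"), ("and", "CC"), ("weary", "JJ"), ("ravens", "NNS"),
]

# Keep common nouns (tags NN and NNS), deduplicate, sort alphabetically.
noun_dictionary = sorted({word for word, tag in tagged if tag in ("NN", "NNS")})
print(noun_dictionary)  # ['midnight', 'ravens']
```

Swapping in a different source text (Poe, the Bible, a cookbook) yields a different dictionary and therefore a differently flavored S+7.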
Devising this program has forced me to consider the S+7 from a critical perspective, rather than a purely inventorial
one. While I had initially included the S+7 as a
basic example of an arithmetical Oulipian technique, I now see it as a first
attempt in a long line of Oulipian research on productive substitutions.
These valuable insights, while not necessarily unique to digital humanities
work, were facilitated and brought to light as a direct result of this
project. The collaborative nature of the project is one reason: discussions
with Cliff Wulfman, who is not only a literary scholar but also a computer
programmer, enabled both of us to understand this procedure simultaneously
as inspired by early computers and algorithmic methods.
Studying and implementing natural language processing with nltk allowed me to better understand the Oulipian mindset that spawned these early procedures, elucidating the historical context of why the group ultimately abandoned these efforts. The early Oulipo, inspired by the potential of computer programming, demonstrated an algorithmic approach to literature in the early constraints the members produced. My programming experiences helped me to understand the essential difference between these types of algorithmic procedures and the work that the group eventually settled on following the members' dissatisfied responses to the computer experiments of the 1960s and 1970s: a clear preference for abstract mathematical thought — patterns and structure — over the procedural tendencies of applied mathematics.
Finally, the creativity required to carry out such a project brings a new freshness to my research and analysis that I hope will enable it to say something truly new about what appears to be a trivial, superficial method meant to produce one-off silly results. While my code is likely not adaptable for other digital humanities projects, I do believe that future scholars can benefit from a greater understanding of nltk in order to learn more about other algorithmic production that coincided with the development of computational linguistics and early humanities computing in the 1960s and 1970s. An understanding of nltk has the additional benefit of promoting a certain kind of Oulipian creativity in literary analysis, allowing a digital humanities scholar to write code that can analyze texts, completing this fascinating loop of research and invention.
While the official goal of this project had to be to produce a product, the process of producing it proved just as valuable.
That said, as with Oulipian computer-texts that lost their interest when programmed onto a computer, digital tools are best when properly understood and critically implemented, at least with regard to Oulipian work. Creating a computer program from Queneau's texts taught me as much through the process as through the product.
Beyond the narrow use of digital humanities methods in Oulipo studies, this experience has convinced me that the broad way in which the field defines itself is productive. Indeed, the nebulous nature of definitions of the digital humanities allows for great variety in approaches and can foster creativity. Especially for literary studies, such creative approaches are perhaps the most appropriate.