Deena Engel is a Clinical Professor as well as the Associate Director of Undergraduate Studies for the Computer Science Minors programs in the Department of Computer Science at the Courant Institute of Mathematical Sciences of New York University. She teaches undergraduate computer science courses on web and database technologies, as well as courses for undergraduate and graduate students in the Digital Humanities. Ms. Engel also conducts research and supervises undergraduate and graduate student research projects in the Digital Humanities. She holds Master’s degrees in both Comparative Literature and Computer Science.
Dr. Marion Thain teaches literature and the liberal arts at New York University, and is Associate Director for Digital Humanities for the Faculty of Arts and Sciences (website: https://sites.google.com/a/nyu.edu/marion-thain/). Her research is primarily in the areas of: aestheticism and Decadence; British poetry and poetics; the Digital Humanities. She has published five books and many essays and journal articles, including:
Co-teaching a digital archives course (ENGL-GA.2971) for graduate students in the English Department allowed us to bring together our expertise in both research and pedagogy from two fields: English Literature and Computer Science. The course built on a core pedagogical principle in Computer Science of teaching through projects rather than through unrelated one-off programming or web development assignments. Teaching the Text Encoding Initiative after students had completed hands-on projects (using xHTML, CSS, and a digital archive built in a standard content management system) enabled the building of technological skill sets in a logical and complementary manner. From a literary perspective, building a digital archive and teaching text encoding enabled an in-depth consideration of textual materiality, the processes through which literary scholarship must inform technological building decisions, and the ways in which the act of digitization can be used to ask new questions of the text (or to prompt the text to ask new questions of itself). This paper will survey our techniques and approaches to interdisciplinary teaching, culminating in our use of text encoding to explore issues of textuality through digital presentation.
Techniques and approaches to interdisciplinary teaching for a graduate digital archives course.
This article reflects on the practice of designing and implementing a course that aimed to teach graduate students in English the skills to build a scholarly online digital archive from primary source materials. A founding principle of our pedagogical practice is the integration of CS and DH methods. Because this is still a fairly young pedagogical field, we hope that discussion and sharing on this topic will be timely and useful to others. This course was part of an evolving response to the key current pedagogical challenge of teaching graduate students in English how to synthesize a new set of technological skills with literary expertise to discover new ways of working and new questions to ask of their texts.
The ultimate ambition of our course was for each student to digitize text otherwise inaccessible beyond the material archive. As such the focus was on dealing with manuscript materials, marginalia and other editorial markings on typescripts, and materials that had a particular material instantiation. The students were free to choose their focal materials to reflect their own interests, and had access to the manuscripts collections and Fales’ special collections in the Bobst Library at New York University. The resulting digital archives included, for example, collections of nineteenth- or twentieth-century letters by well-known writers; manuscript drafts of what went on to become well-known literary texts; type-script drafts of literary essays; and even a collection of contemporary hand-made ‘zines’. Each archive included images of the documents, transcriptions, TEI encodings, contextualization through further reading lists, and prose narratives about their materials and their digital archival practice.
New York University’s Department of English first offered a course on creating online archives from primary source materials in the fall, 2011 semester under the course number and title
The course under discussion in this article,
From a Computer Science perspective this course sought to implement and contextualize four current goals and trends in Computer Science education. Following is a brief description of each of these goals with references for further reading.
(1) Project-Based Learning (“PBL”):
Project-based courses are common in computing education. As a method within the problem-based learning toolkit, projects can be used at any stage within a degree program to explore alternative and often more complete solutions to a given problem, allowing the theory to emerge as necessary.
Students who drop out from a course do not have the opportunity to learn all the material and transfer it. If students understand the usefulness of what they are learning, they are less likely to give up.
There is a current trend in Computer Science pedagogy to encourage project-based learning (PBL) in place of traditional one-off programming or development assignments.
(2) The Role of Web Development in Computer Science Pedagogy:
Web development can provide a rich context for exploring computer science concepts and practicing computational creativity.
Web development has become a serious area of study for undergraduate Computer Science majors.
(3) Inclusion and Computer Science Pedagogy:
Supporting a workforce that can create, not simply consume, computing technology requires a shift in pedagogy toward problem solving in a gender neutral, culturally and ethnically diverse community.
There is an acute awareness within the Computer Science pedagogy field that education in Computer Science and technology must consciously and deliberately address issues of inclusion, so that women and minority students, as well as disabled students, have full access and encouragement to participate. Reassuring the students that there is no such thing as a dumb question, and that web technology is constantly evolving so the trick is to learn how to frame the question, helped to assuage the students’ fears.
(4) The Development of Computer Science Pedagogy Within Inter-disciplinary and Cross-disciplinary Areas of Education:
Computer science holds a unique position to craft multidisciplinary curricula for the new generation of faculty and students across the academy who increasingly rely on computing for their scholarship.
Computer Science departments must continue to meet the needs of other departments and collaborate both on University pedagogy goals and with research.
Our course was structured so as to begin with connected but distinct issues in literature and technology that ran in tandem, but to work towards the integration of the two sets of considerations. This integration culminated in the final section of the course on the Text Encoding Initiative (http://www.tei-c.org/index.xml). From the perspective of the literature professor, then, the overall pedagogical aim was to provide a layer of commentary on the course activities that encouraged students to reflect on all the technical decisions they made in building their sites as themselves potentially acts of interpretation of the text they were representing. Through looking at recent debates around textual interpretation in literary studies, it was possible to make connections that enabled students to consider how their own practice in digital reproduction might be intersecting fundamentally and significantly with issues of textuality. This goal was one of teaching the students how to think in ways that confront the deepest issues of interdisciplinarity, and to avoid the natural compartmentalization of skill sets. We maintained the specificity of the issues within each disciplinary field while showing the points of intersection by role-modeling debates between the two teachers. With the main outcome of the course being the online digital archive, we decided not to ask the students to write a separate reflective essay-commentary on their building of those sites, but to ensure that this reflection was built into the site through their layout, the choices they made, and through the prose in their ‘About’ sections.
Also from a literary perspective, it was crucial to the course that there was the potential not just for a growth in technological skills informed by considerations drawn from the humanities, but a growth in awareness of textual features through attending to the literature in a new way. In other words, the act of digitization became within the course a new way of being sensitive to the features of the text for literary, critical, and interpretative purposes. The potential for this process to work in two disciplinary directions simultaneously was particularly interesting pedagogically because the students came to the course with varied levels of expertise in technology and in literary study. The humanities skills the students developed through the course were those of the literary critic (particularly in relation to textual materiality), the literary-historical scholar, and the editor. Depending on each student’s previous experience and their comfort with literature and technology, respectively, the opportunities were different, but for every student the objective was a symbiotic relationship between the two, with the digitization process prompting new engagement with the texts they had chosen, and for the resulting considerations to then go on to inflect the way they used the technology within their site.
While this article focuses on the collaboration of the two professors who designed and taught the course, it should be noted from the outset that the course benefitted enormously from the involvement of a broader team. The librarians at the Bobst Library, and particularly those working within Fales (Charlotte Priddle, Lisa Darms, and Amanda Watson), provided a session for our students teaching them how to use special collections and how to handle its materials. Melitte Buchman from the Bobst Digital Studio (http://dlib.nyu.edu/dlts/) worked with our students both as a group and individually to help them photograph their materials and understand the principles and practices of imaging. The Systems Group within the Courant Institute of Mathematical Sciences (CIMS) provided the technological resources required to host and support the students’ projects. Working within this broader team not only helped better support our students but also offered something in return, as our students provided the library with high resolution tiff images of the materials they had scanned.
This course met twice each week: once in a traditional classroom lecture and discussion format; and a second time in a multi-media lab for hands-on work. Both instructors attended all of the lecture classes; in addition, the Computer Science faculty member provided lab sessions, and the English faculty member offered consultation hours immediately following each lab session (in practice, there was much collaboration between the two professors in each of these additional strands).
Every student was given an account on a production webserver (http://cims.nyu.edu/webapps/content/systems/resources/i6). The Courant Institute provides two servers dedicated to teaching purposes that are used throughout the Computer Science department: a web-server for students to host their sites and a database server running MySQL so that students learn to create and manage their own databases. Students also worked in the Digital Studio (http://nyu.libguides.com/digitalstudio) and ITS Multi-media lab which provided scanning equipment and appropriate software for image manipulation and web development.
The course was divided into four units. A list of the technical skills which were taught in each unit is followed by an overview of the pedagogical practices. The first three units each entailed a project-based assignment which is described below as well.
In this unit of the course, we covered skills and topics typically taught in a standard introductory web design course. These topics include:
The project assignment for this unit consisted of a website of a minimum of two pages rendered in HTML5 and CSS3 to describe an author of the student’s choice and publish two or three selections of his or her work.
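A minimal sketch of what such a two-page project might look like (the author, titles, and filenames are hypothetical; a second page, biography.html, would follow the same pattern):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Selected Works of a Chosen Author</title>
  <style>
    /* Simple CSS3 styling kept inline for the sketch;
       students would typically use a separate stylesheet */
    body  { font-family: Georgia, serif; max-width: 40em; margin: 0 auto; }
    nav a { margin-right: 1em; }
  </style>
</head>
<body>
  <header><h1>Selected Works</h1></header>
  <nav>
    <a href="index.html">Works</a>
    <a href="biography.html">Biography</a>
  </nav>
  <article>
    <h2>Poem Title</h2>
    <p>The text of the first selection would appear here.</p>
  </article>
</body>
</html>
```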
In this unit of the course, each student installed his or her own WordPress (http://wordpress.org) site on the teaching webserver. The Computer Science faculty member, in consultation with the English Department, decided to use WordPress in this case because Omeka was not supported on the server we used and we believed Drupal was too complex for an introductory class taught within one semester for this student population. As each student installed his or her own WordPress site, they had the opportunity to learn about WordPress as an example of a CMS and would be able to carry this knowledge to future CMS projects.
Specific topics in this unit included:
The assignment for this unit of the course consisted of their first draft of an online digital archive based on primary source materials; content included text and images.
Based on the principle in Computer Science pedagogy of building skills on a strong foundation, we introduced XML in the final development unit of the course. In this unit of the course, we sought to build on the students’ understanding and experience with HTML5 as a mark-up language in order to contextualize and introduce XML.
Students were introduced to the syntax and structure of XML as it is used in its narrative form for descriptive meta-data; students in this course did not use XML to generate stand-alone datasets. There are significant differences between XML and HTML; the much greater flexibility of XML was presented to the students as a means to better annotate and analyze their literary texts. Students already understood that HTML and CSS separate the content of a text from the infinite variety of possible output formats. In this course, we discussed XML implementation to further describe and define the underlying literary structure of a text (e.g. a play consists of acts, and each act may consist of one or more scenes, while a novel containing chapters would be encoded differently in order to capture the underlying structure of prose). We further introduced the flexibility and variability available in XML languages to capture meta-data, specifically using XML tags standardized in the literary scholarly community through the international efforts of the TEI Consortium and related projects. The two professors worked closely together to pick a series of examples for the first lectures on TEI/XML in order for the students to make the transition and see the value of XML over plain text or HTML.
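The structural distinction described above can be sketched in TEI-style markup as follows (an abbreviated, hypothetical fragment: a conformant TEI file would also require a teiHeader, and the sample lines are purely illustrative):

```xml
<text xmlns="http://www.tei-c.org/ns/1.0">
  <body>
    <!-- Drama: acts contain scenes, scenes contain speeches -->
    <div type="act" n="1">
      <div type="scene" n="1">
        <sp who="#first-witch">
          <speaker>First Witch</speaker>
          <l>When shall we three meet again</l>
        </sp>
      </div>
    </div>
    <!-- Prose: a novel would instead be divided into chapters of paragraphs -->
    <div type="chapter" n="1">
      <p>Opening paragraph of the chapter.</p>
    </div>
  </body>
</text>
```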
For web presentation of the TEI documents to accompany their websites, students were offered the choice to use CSS or TEI-Boilerplate (http://dcl.slis.indiana.edu/teibp/index.html). XSLT was not taught formally in this class due to the concern that this is a more advanced topic which could intimidate students at this level, especially in light of the complex literary texts that they selected. However, the Computer Science instructor introduced XSLT briefly at this time in the course so that the students would understand the importance of it in web publishing and its wide usage in the TEI community. Several students expressed an interest in XSLT workshops such as those conducted by the Women Writers Project (http://www.wwp.northeastern.edu/outreach/seminars/).
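Although XSLT was not formally taught, its role is easy to illustrate. A minimal stylesheet along the following lines (a sketch, assuming TEI-namespaced input with line groups and lines) would transform encoded verse into simple HTML for the browser:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:tei="http://www.tei-c.org/ns/1.0">
  <!-- Wrap the whole document in a bare HTML page -->
  <xsl:template match="/">
    <html><body><xsl:apply-templates select="//tei:lg"/></body></html>
  </xsl:template>
  <!-- Each TEI line group becomes a styled division -->
  <xsl:template match="tei:lg">
    <div class="stanza"><xsl:apply-templates select="tei:l"/></div>
  </xsl:template>
  <!-- Each TEI verse line becomes an HTML paragraph -->
  <xsl:template match="tei:l">
    <p class="line"><xsl:value-of select="."/></p>
  </xsl:template>
</xsl:stylesheet>
```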
Preservation is a very important topic with respect to digital archives from both a literary and a technical perspective. In the final class, we discussed the technological aspects of the students’ sites that would require modification for permanent online publication, and how to render a finished CMS site into a static HTML/CSS site for permanent publication (this discussion centered on a plug-in currently available at http://wordpress.org/plugins/static-html-output-plugin/; note that this plug-in required modification by the instructor in order for the students to use it). The students understood that such a transition would eliminate any future issues of upgrades to PHP running on the webserver in general and to future versions of WordPress in particular, and that it would make the site easier for the Systems administrators to support, as MySQL would no longer be needed to keep their sites running after such a transition.
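Independent of the plug-in mentioned above, one generic way to snapshot a running CMS site into static HTML is a mirroring crawl; a sketch (the URL is hypothetical):

```shell
# Crawl the live WordPress site and write out a static copy:
#   --mirror            recursive download with timestamping
#   --page-requisites   also fetch the CSS, images, and scripts each page needs
#   --convert-links     rewrite links to point at the local copies
#   --adjust-extension  save pages with .html extensions
wget --mirror --page-requisites --convert-links --adjust-extension \
     https://example.edu/~student/archive/
```

The resulting directory can then be published on any plain webserver with no PHP or MySQL dependency.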
Interwoven with this technology syllabus was a set of considerations led by the literature professor. This process began at the start of the semester through a discussion about why we might want to digitize texts and what might be gained; what our priorities should be in deciding which texts to digitize; which texts might be most at risk of being lost to us or inaccessible if not digitized; and what might be the consequences of digitizing text. This class discussion generated a humanities agenda (of aims and concerns) to which we returned throughout the course in relation to our practice. The following two weeks were devoted to analyzing existing online literary archives: first a few chosen for collective group attention,
For the literary component of this first section of the course, essays such as Kenneth M. Price’s piece on Electronic Scholarly Editions were used to introduce many of the key concerns
Teaching the students about metadata was a crucial part of the first half of this course and the topic was introduced, from the literary-studies perspective, through the scholarly debate around textual materiality. The technical information about metadata standards was provided through the online pamphlet —
The benefit of approaching the issue of metadata through literary scholarship around textual materiality was that it enabled a much deeper engagement with the issues of creating a digital representation. These would not have emerged had we simply instructed the students of the need to record, for example, the particularities of the place, date, and publication of the particular edition they were digitizing, or the physical location and call number of the manuscript they might be working with. The discussion around the importance of issues of materiality to the interpretation of text brought to light all kinds of other aspects of the physical text that might be important in representing it in a digital format: both through metadata and through the photographs and scans they were to create. For example, we discussed the importance of giving the online reader images of the cover and front-pages and end-pages of a text, and considered the ways in which sites that had not done so might limit the questions scholars could ask of the texts they represent. In light of recent scholarship on the history of paper, we also considered the potential importance of giving information such as the type and weight of paper used as part of the metadata, and the significance of such information to certain kinds of scholarly questions (see
Issues of materiality central to the course were given an interesting twist through one student’s focus on contemporary zines. Because zines are often created by hand rather than on a computer, exploiting a rawness that appears to oppose the aesthetics of new technologies, the students were encouraged to think about how these pieces posed interesting theoretical, and sometimes practical, questions for the process of digitization. One zine in particular, which was folded in such a way as to give the potential for many different views of its content depending on how it was unfolded, exploited the three-dimensional possibilities of paper in space. Although with enough separate images from different angles (or within a video or animation) these possibilities could be captured and uploaded to the digital archive, the zine was clearly designed to challenge the two-dimensional textuality of both the traditional codex technology and digital textuality. The introduction of literary-theoretical frames for thinking about the insistent materiality of these items led the student to write to the authors with questions both about their intentions and to ask permission to digitize the zines. The replies were often revealing of a reaction against digital culture and can be summarized with reference to the slogan MORE ZINING, LESS BLOGGING (http://8ballzinefair.com/ABOUT). By encouraging the group to think about what it means to digitize items that were created (in part, at least) in opposition to digital culture, we were able to provide a particularly interesting and challenging perspective on the processes the students were undertaking.
This discussion about the material instantiation of text (in relation to very varied examples from the nineteenth to the twenty-first century) led by the literature professor was part of the preparation for students to go into the university library’s special collections department to choose the materials they wanted to digitize. Attending a session organized by the library staff, the students learned about the collection, and how to handle its materials. The library staff also covered copyright issues that continued to play an important role in class discussions throughout the course as permissions were clarified, sought and granted. The following weeks of the course focused, from the literature perspective, on helping the students to choose their texts; and culminated in using one of the classes to present their materials and discuss them with the other students in the class. At the presentation the students were asked to give a rationale for their choices, including thoughts on the textual, critical/historical and technical significance of their selection. This discussion, in a roundtable format, was particularly useful because the students had chosen very different kinds of materials to digitize: sharing their work with each other enabled them to learn from the very different challenges they were each facing, and to broaden their understanding beyond the issues their own materials posed. Over the next couple of weeks, when the class time was devoted primarily to the technical issues of building their sites, it was possible to ensure the
We brought all of this back into class again in week eight, when almost the whole session was devoted to another literature-led roundtable with each student presenting their site under construction and explaining in ten minutes why they had chosen the proposed site architecture and to give a rationale, from a literary and scholarly perspective, for the technical choices they had made. They were asked to consider not just how their materials could best be presented, but also who their intended audience was and how a desire to be useful and relevant to a broad audience might inflect the design of their sites. At this point the students were asked to consider what kind of interest their texts might hold within current scholarly fields of enquiry and how to connect with and aid those users, but also how they might make their sites sustainable in a literary, rather than a technological sense, by keeping their materials open as far as possible to the kinds of questions future users might want to ask of them that we cannot now necessarily predict. As an example, introducing archives that had not digitized the advertisements at the back of the nineteenth-century periodicals they represented, we discussed how such materials were now considered of great interest to scholars both in their own right and as context for the articles that had been digitized. All the students reported making changes to their sites as a result of the reflective process they undertook in presenting their rationale to others. Listening to each other’s presentations and discussing them afterwards also helped significantly in sharing good practice and introducing new possibilities for each project. At the end of each presentation the student was asked what they might enable through their digitization of the text that could not be done before.
Here they were asked to go beyond the more obvious answers that related to preservation and access to think about how their digitization strategy enabled the texts to ask new questions of themselves; the ensuing discussion was particularly formative for shaping the next phase of site building the students undertook in the course.
The second half of this session was devoted to a round-up of the challenges students were facing with their projects from a literary, rather than technological, perspective. The agenda here was determined in key part by the problems, and solutions, that had arisen with students individually in the lab sessions (which often highlighted more general issues they should all consider). The agenda included a discussion of issues around the principles of manuscript transcription. Here the students were invited to consider the well-worked out transcription statement represented in a site such as the Bentham letters (given there in full to aid crowd-sourced transcription) and how it might help them develop principles for their own transcription (http://blogs.ucl.ac.uk/transcribe-bentham/). Editorial experience was also used to encourage the students to think about the significance of recording uncertainty, introducing the need to balance a natural desire for definitive and interpretative transcriptions with a strategy that left open possibilities for other meanings that might become relevant in the future. Also raised here was the general, and related, issue of fidelity versus clarity/usability, and clutter versus functionality, to help the students think about the challenges they were all facing in deciding how much detail to present and how many different functions to build into their sites. The possibility of offering different views of the material, so a reader could choose either a simple or more complex view and functionality, was a particularly useful point around which the professors could role-model the interplay of technical and scholarly considerations. Another major topic at this point was the use of visualization and mapping to enable the students to present data in new ways. 
With one student using a map to locate manuscripts geographically, we were able to introduce the idea of using visualization more generally in other, non-geographical, ways to help get more out of their data. For example, another student had been wondering which of the many and varied published editions of her novella to provide alongside the manuscript version she was digitizing; one solution discussed with her was to use the software tool Juxta (http://www.juxtasoftware.org/) to show the relationship between the manuscript and various different published editions, showing visually the areas of greatest similarity and difference between them.
This process of reflecting on the literary issues at stake in their building procedures ended with the students being asked to imagine what they might enable their users to do with their materials in an ideal world — and what the literary or scholarly value of those digital experiments might be. This was the basis for a discussion of how the course instructors might be able to help the students implement some of those ideas in a realistic form within the time and resources available. For example, a student who was interested in large-scale crowd-sourced transcription created a game-inspired transcription quiz in the form of a WordPress blog. She used this to address some of the challenges of legibility posed by the author’s hand-written marginal comments in the type-written manuscript she was digitizing. Another student who was digitizing an archive of mid-twentieth-century letters written from New York City plotted the addresses from which they were sent on a historically accurate map of the city. She used KML (a mark-up language based on XML for use with Google Maps) to identify the geographical positions, and then superimposed a map from the archives of the New York Public Library over the Google map to make the mapping historically relevant. Adding a geographical component to her work enabled her to analyze the importance of the location of writing to its content — a process that revealed interesting connections.
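KML, the dialect used for the mapping project described above, is a simple XML vocabulary; a single letter’s sending address might be recorded with a placemark along these lines (the letter, description, and coordinates are invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>Letter of 12 May 1948</name>
      <description>Sent from an apartment in Greenwich Village</description>
      <Point>
        <!-- KML lists coordinates as longitude,latitude -->
        <coordinates>-73.9997,40.7336</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>
```

A file of such placemarks can be loaded directly into Google Maps or Google Earth and, as in the student’s project, overlaid with a historical base map.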
The course culminated in teaching encoding to the students so that they could add a further layer of information to their sites, experimenting with what the TEI might be able to offer their projects. First we used the
Following on from this class teaching the language of the TEI, the literature faculty member led another session focusing on the literary and critical significance of encoding. The chapter
Having worked through the sonnet in detail, we then gave them a poem in which only the first line appears in full (‘In Love’s rubber armor I come to you’) but the subsequent thirteen lines of the sonnet are indicated purely by a letter marking the position on the page where each line would end. The letter given to mark each line ending crucially also represents the strict rhyme scheme of the sonnet. From a technological perspective, the aim of this exercise was to get the students to produce an XML encoding, but from a literary perspective it was also to ask students to think about the act of encoding and to recognize the poem as itself in some ways a challenge to some of the structuralist assumptions upon which the TEI is founded. The poem is a post-modern parody of the conventional nature of love poetry, gesturing towards the generic, iterable and predictable nature of the romantic sonnet. As such, the poem is a gesture towards a sonnet rather than a sonnet itself, and in this way it raises questions about the kinds of classifications on which encoding is based. The poem has fourteen lines, and the rhyme scheme one would expect from a Shakespearian sonnet, but does that make it a sonnet, and should it be identified in that way in the TEI? What happens if we try to encode this poem in the way we tagged the Shakespeare sonnet? What questions does this ask of the poem, and what might it miss or misrepresent? To encode this poem as a sonnet is to explain its own joke in a rather heavy-handed way, but the joke is also in some sense on the process of encoding, which the playfulness of the poem seems to evade. Looking at this poem with the students aimed, then, to explore and question in practice some of the issues that Jerome McGann identified when he wrote that because the TEI
treats the humanities corpus – typically works of imagination – as informational structures, it ipso facto violates some of the most basic reading practices of the humanities community, scholarly as well as popular
It was in light of this reflective engagement with the TEI — on its possible limitations and problems as well as its benefits — that we asked the students to consider TEI in relation to their own projects. The key question was what problems could TEI solve for them, and what could it offer the user of the site that was currently not otherwise possible. The work we did in class on encoding various genres of literary text complemented the work we did with the students in the lab session, which focused primarily on their own project work with manuscripts. One key area for discussion here was the potential for the TEI to render searchable the textual content captured and presented primarily through images. The students’ work with hand-written manuscripts presented particularly interesting possibilities here, in relation to providing options for the interpretation of complex and problematic script, and for presenting issues of manuscript materiality. For example, a project digitizing nineteenth-century hand-written letters, in which additions were often placed in the gaps on the page, prompted class debate on how best to render this text and useful discussion about the complementarity of manuscript image and text encoding. In addition to transcription, the students also used the TEI to note additional contextual information as well as the usual metadata. For example, the student who digitized letters written from various locations in New York City used the TEI to encode the geographical data. Another used the TEI to add information about the hand-drawn images that appeared in her text, thus enabling a further explanatory and interpretative commentary that altered the way the text would be read.
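The kind of verse tagging the students practiced on the Shakespeare sonnet can be sketched as follows (an abbreviated, hypothetical fragment using Sonnet 18; a conformant file would also carry a teiHeader, and the rhyme attribute is one of several TEI conventions for recording a rhyme scheme):

```xml
<lg xmlns="http://www.tei-c.org/ns/1.0"
    type="sonnet" rhyme="ababcdcdefefgg">
  <lg type="quatrain">
    <l n="1">Shall I compare thee to a summer's day?</l>
    <l n="2">Thou art more lovely and more temperate:</l>
    <l n="3">Rough winds do shake the darling buds of May,</l>
    <l n="4">And summer's lease hath all too short a date:</l>
  </lg>
  <!-- two further quatrains would follow here -->
  <lg type="couplet">
    <l n="13">So long as men can breathe, or eyes can see,</l>
    <l n="14">So long lives this, and this gives life to thee.</l>
  </lg>
</lg>
```

Encoding the parody poem discussed above in this same scheme is precisely what raises the classificatory questions the exercise was designed to expose.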
The structure of the assessment reflected the structure of the course more generally in that we began with smaller separate technical and literary assignments and then brought the two together in the main project. The first two assignments (building a small website for the purposes of studying HTML and CSS, and evaluating existing online text-archive sites) were assessed by the Computer Science and English faculty members respectively. The final online digital archive and the TEI-encoded literary texts were assessed by the two faculty members together. The course ended with the students formally presenting their sites to the group for the final assessment, in order to maintain the group dynamic and the sense of peer-sharing which had been so important throughout the course.
Just as the Digital Humanities is an emerging field, so is the pedagogy associated with it. Case studies such as those described in
The principles of our collaboration might be summarized under four points. The first was the preparation we did before the course: planning how we would interconnect the concerns of our respective disciplines, conceptually and practically, in the structure of each class. The second was the weekly meetings we held to monitor and adjust the course content and pedagogical approach. The third concerned the differences in disciplinary teaching methods: our collaboration involved an integration, or alternation, of the seminar or roundtable discursive style of the humanities with the more lecture-driven style of Computer Science, complemented by lab sessions. Fourth, and finally, we sought to model interdisciplinary collaboration in various ways throughout the class, showing the students that the same question might have two different answers, one technical and one from a scholarly humanities perspective; we then pursued the connections through discussion with the class, and through discussion with each other in front of the class. While at the start of the course we presented the introductory technological skills and the key humanities questions separately, the goal was to achieve convergence of these perspectives over the length of the course by teaching the students to see every technical decision they made in building their sites as an interpretative gesture (and to see their interpretative and critical analyses as having a technical embodiment).
Various problems and challenges arose as we taught the course. For example, the imaging of the primary materials occurred early in the semester and generated a great deal of enthusiasm, with the result that students imaged more material than they could reasonably work with; in future we would ask students to think more carefully at this stage, limiting the focus and giving more scope for qualitative rather than quantitative work. We also ran into problems with the ever-changing technological environment: Google was phasing out support for KML, which required a new solution for one student’s mapping work, and TEI-Boilerplate turned out to serve our purposes less well than expected in the CMS environment. In terms of pedagogical approach, we adapted the course as it progressed to fit the needs of the students; in particular, we worked to build in greater opportunities for discussion and for sharing information between the students and their respective projects. In addition, once we began working with the students on their individual projects, we used the particular challenges they were facing with their materials as our examples in the class lectures.
We have engaged with other departments to see how the course might be modified to suit a broader range of humanities students. Our initial assessment suggests there are ways to develop the course to cater to students from a range of disciplines in Literature and History. For example, introducing EAD (Encoded Archival Description: http://www.loc.gov/ead/) alongside TEI as examples of XML languages would be useful for history students and would complement the work done by literature and history students using TEI with manuscripts and documents. The addition of a historian to our teaching team would be essential in enabling us to expand our interdisciplinary conceptualization and theorization of the online archive in conjunction with this expanded technological syllabus. Having taught the course once, we are also considering the following changes for the second iteration: the addition of a lecture on using XSLT to format the TEI examples used earlier in the class; and the addition of more reading and discussion on the theory of the TEI.
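A lecture on XSLT would cover transformations of roughly the following shape (a minimal XSLT 1.0 sketch, not a stylesheet from the course, rendering TEI verse markup of the kind used in class as simple HTML):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: render TEI <lg> and <l> elements as HTML divs and paragraphs -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:tei="http://www.tei-c.org/ns/1.0">
  <xsl:output method="html"/>

  <!-- Wrap the whole document in a bare HTML page -->
  <xsl:template match="/">
    <html><body><xsl:apply-templates select="//tei:lg"/></body></html>
  </xsl:template>

  <!-- Each line group becomes a div carrying its @type (sonnet, quatrain...) -->
  <xsl:template match="tei:lg">
    <div class="{@type}"><xsl:apply-templates select="tei:lg | tei:l"/></div>
  </xsl:template>

  <!-- Each verse line becomes a paragraph -->
  <xsl:template match="tei:l">
    <p class="verse-line"><xsl:value-of select="."/></p>
  </xsl:template>
</xsl:stylesheet>
```

Even a stylesheet this small makes the pedagogical point: the same encoded text can be given multiple presentations, so the interpretative work of the encoding is kept separate from any one display decision.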
Finally, we have found that teaching collaboratively enabled us to think about new research practices. This connection has not always been obvious:
From the outset, it was clear that teaching together in connected courses would be good for our students. What was not so clear was how good it would be for the faculty’s professional development and research output.
Yet, as we worked together, we came to appreciate better the overlap of our fields of study, and ideas for new research collaborations emerged. Digital Humanities is an area in which collaboration can be particularly valuable to ensure a depth and breadth of both technical and conceptual knowledge, and through our collaboration we felt able to explore profound methodological intersections between the fields of technology and the humanities.
(http://cs.nyu.edu/courses/fall13/ENGL-GA.2971-001/DT_index_fa13.php)
The interface of technology and the humanities represents a key to the future, yet many students feel they lack the skills to access this potential. This course offers an introduction to web development and digital publication especially created for students in the Humanities, with a view to equipping you with knowledge foundational for reflective engagement with the new media of literary creation and dissemination. Students will survey the principles of current technologies and apply them through practice as they learn the skills and techniques for formatting and publishing archival materials in a web-based environment. The course builds towards the creation of a digital edition, giving you the opportunity to work with primary source materials available through NYU's rich archival collections (these include a wide variety of printed texts, manuscripts and images from which to select according to your interests).
The course will consist of a traditional classroom lecture and discussion format as well as computer lab sessions to promote and assist students as they work on each of their three projects in this course. Each student will have an account on a production webserver to post their work and learn to install and configure a WordPress site specifically tailored to his or her primary source materials. Topics and assigned projects will begin with an introduction to mark-up languages and building a site of related web pages followed by a project centered on encoding and annotating digital texts for scholarly purposes. The final project involves photographing or scanning, transcribing, and encoding digital texts to build an online archive.