Previous Issues
- 2023: 17.1
- 2022: 16.4
- 2022: 16.3
- 2022: 16.2
- 2022: 16.1
- 2021: 15.4
- 2021: 15.3
- 2021: 15.2
- 2021: 15.1
- 2020: 14.4
- 2020: 14.3
- 2020: 14.2
- 2020: 14.1
- 2019: 13.4
- 2019: 13.3
- 2019: 13.2
- 2019: 13.1
- 2018: 12.4
- 2018: 12.3
- 2018: 12.2
- 2018: 12.1
- 2017: 11.4
- 2017: 11.3
- 2017: 11.2
- 2017: 11.1
- 2016: 10.4
- 2016: 10.3
- 2016: 10.2
- 2016: 10.1
- 2015: 9.4
- 2015: 9.3
- 2015: 9.2
- 2015: 9.1
- 2014: 8.4
- 2014: 8.3
- 2014: 8.2
- 2014: 8.1
- 2013: 7.3
- 2013: 7.2
- 2013: 7.1
- 2012: 6.3
- 2012: 6.2
- 2012: 6.1
- 2011: 5.3
- 2011: 5.2
- 2011: 5.1
- 2010: 4.2
- 2010: 4.1
- 2009: 3.4
- 2009: 3.3
- 2009: 3.2
- 2009: 3.1
- 2008: 2.1
- 2007: 1.2
- 2007: 1.1

ISSN 1938-4122
DHQ: Digital Humanities Quarterly
2023 17.3
Categories in Digital Humanities
Editors: Dominik Gerstorfer, Evelyn Gius, and Janina Jacke
Front Matter
Working on and with Categories for Text Analysis: Challenges and Findings from and for Digital Humanities Practices
Dominik Gerstorfer, Technische Universität Darmstadt; Evelyn Gius, Technische Universität Darmstadt; Janina Jacke, Georg-August-Universität Göttingen
Abstract
[en]
This is the editorial of the special issue “Working on and with Categories for Text Analysis.”
Articles
Making the Whole Greater than the Sum of its Parts: Taxonomy Development as a Site of Negotiation and Compromise in an Interdisciplinary Software Development Project
Jennifer C. Edmond, Trinity College Dublin; Alejandro Benito Santos, University of Salamanca; Michelle Doran, Digital Repository of Ireland; Roberto Therón, University of Salamanca; Michał Kozak; Cezary Mazurek, Institute of Bioorganic Chemistry of the Polish Academy of Sciences; Eveline Wandl-Vogt, Austrian Academy of Sciences; Aleyda Rocha Sepulveda, Austrian Academy of Sciences
Abstract
[en]
This paper describes the experience of a group of interdisciplinary researchers and
research professionals involved in the PROgressive VIsual DEcision-making in Digital
Humanities (PROVIDEDH) project, a four-year project funded within the CHIST-ERA call
2016 for the topic “Visual Analytics for Decision Making under
Uncertainty — VADMU”. It contributes to the academic literature on how
digital methods can enhance interdisciplinary cooperative work by exploring the
collaboration involved in developing visualisations to lead decision-making in
historical research in a specific interdisciplinary research setting. More
specifically, we discuss how the cross-disciplinary design of a taxonomy of sources
of uncertainty in Digital Humanities (DH), a “profoundly
collaborative enterprise” built at the intersection of computer science and
humanities research, became not just an instrument to organise data, but also a tool
to negotiate and build compromises between different communities of practice.
Visualization of Categorization: How to See the Wood and the Trees
Ophir Münz-Manor, The Open University of Israel; Itay Marienberg-Milikowsky, Ben-Gurion University of the Negev
Abstract
[en]
In the article, we present, theorize and contextualize an investigation of
figurative language in a corpus of Hebrew liturgical poetry from late
antiquity, from both a manual and a computational point of view. The study
touches upon questions of distribution and patterns of usage of figures of
speech as well as their literary-historical meanings. Focusing on figures of
speech such as metaphors and similes, we first annotated the corpus manually,
with markers on paper; a few years later it was annotated manually again, this
time in a computer-assisted way, following a strictly categorized approach and
using CATMA (an online literary annotation tool). The
data was then transferred into ViS-À-ViS (an online visualization tool,
developed by Münz-Manor and his team) that enables scholars to “see the
wood” via various visualizations that single out, inter alia,
repetitive patterns either at the level of the text or the annotations. The
tool also enables one to visualize aggregated results concerning more than
one text, allowing one to “zoom out” and see the “forest aspect”
of the entire corpus or parts thereof. Interestingly, after visualizing the
material in this way, it often turns out that the categories themselves
need to be re-assessed. In other words, the categorization and
visualization in themselves create a sort of hermeneutical circle in which
both parts influence one another reciprocally.
Through the case study, we seek to demonstrate that, by using the right methods
and tools (not only ViS-À-ViS but others as well), one can ultimately use
visualization of categorization as the basis for what might be called
established speculation, or non-trivial
generalization: an interpretative act that tries
to be grounded in clear findings while at the same time enjoying the
advantages of “overinterpretation”. This approach, we argue, enables
one to see the trees without losing sight of the wood, and vice versa; or
“to give definition”
– at least tentatively – “to the microcosms and
macrocosms which describe the world around us”, be they factual or fictional.
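A minimal illustration of the kind of repetition such visualizations surface: if a poem's annotations are read as a sequence of category labels, recurring figurative patterns show up as repeated n-grams of that sequence. The Python sketch below is our own simplification, not the ViS-À-ViS algorithm, and the sample sequence is invented.

    from collections import Counter

    def repeated_ngrams(labels, n=2, min_count=2):
        # Count category n-grams that recur within one annotation sequence.
        grams = Counter(tuple(labels[i:i + n]) for i in range(len(labels) - n + 1))
        return {g: c for g, c in grams.items() if c >= min_count}

    # Hypothetical per-segment categories for one liturgical poem:
    poem = ["metaphor", "simile", "metaphor", "simile", "metaphor", "simile"]
    print(repeated_ngrams(poem))  # {('metaphor', 'simile'): 3, ('simile', 'metaphor'): 2}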
Categorising Legal Records – Deductive, Pragmatic, and Computational Strategies
Marlene Ernst, University of Passau, Department of Digital Humanities; Sebastian Gassner, University of Passau, Department of Digital Humanities; Markus Gerstmeier, University of Passau, Department of Digital Humanities; Malte Rehbein, University of Passau, Department of Digital Humanities
Abstract
[en]
Reprocessing printed source material and facilitating large-scale qualitative as well
as quantitative analyses with digital methods poses many challenges. A case study of
approximately 10,000 inventory entries for legal cases from the Special Court Munich
(1933–1945) highlights these challenges and offers a glimpse into a digitisation
workflow that allows for in-depth computer-aided analysis. In this paper, different
methods and procedures for developing categorisation systems for legal charges are discussed.
From semi-structured text to tangible categories: Analysing and annotating death lists in 18th-century newspaper issues
Claudia Resch, Austrian Academy of Sciences; Nina C. Rastinger, Austrian Academy of Sciences; Thomas Kirchmair, Austrian Academy of Sciences
Abstract
[en]
Annotating – understood here as the process in which segments of a text are marked as
belonging to a defined category – can be seen as a key technique in many disciplines,
especially for working with text in the Humanities [e.g. Unsworth 2000], the
Computational Sciences, and the Digital Humanities. In the field of Digital Humanities,
annotations of text are utilized, among other purposes, for the enrichment of a corpus
or digital edition with (linguistic) information, for close and distant reading methods,
or for machine learning techniques. Defining categories to shape data has been used in
different text analysis contexts, including the study of toponyms and biographical data.
The paper at hand showcases the use of annotations within the Vienna Time Machine
project (2020–2022, PI: Claudia Resch), which aims to connect different knowledge
resources about historical Vienna via Named Entity Recognition (NER). More
specifically, it discusses the challenges and potentials of annotating 18th-century
death lists found in the Wien[n]erisches Diarium or
Wiener Zeitung, an early modern newspaper which was
first published in 1703 and has already been (partly) digitized in the form of the
so-called DIGITARIUM: here, users can access over 330 high-quality full-text issues
of the newspaper, which contain a number of different text types, including articles,
advertisements, and more structured texts such as arrival or death lists. The focus
of this article lies on the semi-structured death lists, which not only appear in
almost every issue of the historical Wiener Zeitung but are also relatively
consistent in their structure and display a high semantic density: each entry
contains detailed information about a deceased person, such as their name,
occupation, place of death, and age.
Annotating these semi-structured list items opens up multiple possibilities: the
resulting classified data can be used for efficient distant or scalable reading,
quantitative analyses, and as a gold standard for both rule-based and machine
learning NER approaches. To reach this goal, and as a first step of the
annotation process, the project team conducted a close reading of various death lists
from multiple decades to identify recurrent linguistic patterns and, on this basis, to
develop a first, expandable set of categories. This bottom-up approach resulted in
five preliminary categories, namely PERSON, OCCUPATION, PLACE, AGE and
CAUSE-OF-DEATH, which were color-coded and,
accompanied by annotated examples, documented in annotation guidelines kept as
intersubjectively applicable and concise as possible. These guidelines were then used
by two researchers familiar with the historic material to annotate a randomly drawn
and temporally distributed sample of 500 death list entries in the browser-based
environment Prodigy (https://prodi.gy). In doing so, the emphasis was placed
especially on emerging “challenging” cases, i.e. items where annotators were in
doubt about their choice of category, the exact positioning of annotations, or the
necessity to annotate certain text segments at all. Whenever
annotators encountered such ambiguous items, these were collected, grouped and – as a
third step in the annotation process – discussed with an interdisciplinary group of
linguists, historians and prosopographers. Within this collective, a solution for
each group of issues was agreed on and incorporated into the annotation guidelines.
Also, existing categories were revised where necessary. The new, more stable category
system was then again used for a new sequence of annotation and discussion of
ambiguities, resulting in an iterative process where annotation and category
development became intertwined. This approach, explained in the article in more
detail, demonstrates that tagsets are never entirely final but always depend on
particular knowledge interests and data material, and that even the annotation of
inherently semi-structured lists requires continuous critical reflection and
considerable historical and linguistic knowledge.
At the same time, this work exemplifies that it is precisely these
“challenging” cases which carry great potential for gaining knowledge and
can be considered central to the development of a valid annotation system.
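As a concrete illustration of what a rule-based baseline over such a tagset could look like, the sketch below uses spaCy's EntityRuler; the patterns and the sample entry are invented for demonstration and do not reproduce the project's actual guidelines or pipeline.

    import spacy

    # Blank German pipeline, since the historical entries are in German.
    nlp = spacy.blank("de")
    ruler = nlp.add_pipe("entity_ruler")
    ruler.add_patterns([
        # Toy patterns only; the real tagset covers PERSON, OCCUPATION,
        # PLACE, AGE and CAUSE-OF-DEATH with far richer rules.
        {"label": "OCCUPATION", "pattern": "Schneider"},
        {"label": "PLACE", "pattern": [{"LOWER": "am"}, {"LOWER": "hof"}]},
        {"label": "AGE", "pattern": [{"IS_DIGIT": True}, {"LOWER": "jahr"}]},
    ])

    doc = nlp("Johann Mayr, Schneider, am Hof, 63 Jahr")
    print([(ent.text, ent.label_) for ent in doc.ents])
    # [('Schneider', 'OCCUPATION'), ('am Hof', 'PLACE'), ('63 Jahr', 'AGE')]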
Articles
Automated Transcription of Gə'əz Manuscripts Using Deep Learning
Samuel Grieggs, University of Notre Dame; Jessica Lockhart, University of Toronto; Alexandra Atiya, University of Toronto; Gelila Tilahun, University of Toronto; Suzanne Akbari, Institute for Advanced Study, Princeton, NJ; Eyob Derillo, SOAS, University of London; Jarod Jacobs, Warner Pacific College; Christine Kwon, University of Notre Dame; Michael Gervers, University of Toronto; Steve Delamarter, George Fox University; Alexandra Gillespie, University of Toronto; Walter Scheirer, University of Notre Dame
Abstract
[en]
This paper describes a collaborative project designed to meet the needs of
communities interested in Gə'əz language texts – and other under-resourced
manuscript traditions – by developing an easy-to-use open-source tool that
converts images of manuscript pages into a transcription using optical character
recognition (OCR). Our computational tool incorporates a custom data curation
process to address the language-specific facets of Gə'əz coupled with a
Convolutional Recurrent Neural Network to perform the transcription. An
open-source OCR transcription tool for digitized Gə'əz manuscripts can be used
by students and scholars of Ethiopian manuscripts to create a substantial and
computer-searchable corpus of transcribed and digitized Gə'əz texts, opening
access to vital resources for sustaining the history and living culture of
Ethiopia and its people. With suitable ground-truth, our open-source OCR
transcription tool can also be retrained to read other under-resourced scripts.
The tool we developed can be run without a graphics processing unit (GPU),
meaning that it requires much less computing power than most other modern AI
systems. It can be run offline from a personal computer, or accessed via a web
client and potentially in the web browser of a smartphone. The paper describes
our team’s collaborative development of this first open-source tool for Gə'əz
manuscript transcription that is both highly accurate and accessible to
communities interested in Gə'əz books and the texts they contain.
On Using Deep Learning for Manuscript Texts
Samuel Grieggs, University of Notre Dame; Jessica Lockhart, University of Toronto; Alexandra Atiya, University of Toronto; Gelila Tilahun, University of Toronto; Suzanne Conklin Akbari, Institute for Advanced Study, Princeton, New Jersey; Eyob Derillo, SOAS, University of London; Jarod Jacobs, Warner Pacific College; Christine Kwon, University of Notre Dame; Michael Gervers, University of Toronto; Steve Delamarter, George Fox University; Alexandra Gillespie, University of Toronto; Walter Scheirer, University of Notre Dame
Abstract
This study describes an easy-to-use tool, designed by our collaborative research
team and open to everyone, created to meet the needs of communities who wish to
study or learn from Gə'əz-language texts and other similarly neglected cultural
and ancient writings. The tool takes images of the pages on which manuscript
texts, such as parchments, are written and, using optical character recognition
(OCR), converts the image into regular, machine-readable text. So that this
computer-based tool can recognize the distinctive features of the Gə'əz language,
it passes the language data it receives through curation and processing steps and
then, by means of a Convolutional Recurrent Neural Network, converts page images
into text. Open to all users, the tool benefits students and researchers of
Ethiopian texts by producing a capable, easily computer-searchable corpus, and,
by recording Gə'əz digitally, it enables the history and culture of Ethiopia and
its people to endure. Given suitable ground truth, this open OCR tool that
converts images of Gə'əz into text can also be trained or designed to read other
neglected manuscripts. The tool we created does not require the graphics
processing unit (GPU) commonly used to read and render images on a computer; for
this reason, compared with other modern artificial intelligence (AI) systems, it
does not demand powerful computing resources. It can be run offline from a
personal computer, over the internet, and, in the future, on an internet-connected
mobile phone. This study presents this first-of-its-kind, open, and carefully
built tool, developed by our collaborating researchers, as useful to all
individuals and communities who wish to study Gə'əz books and the texts they contain.
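The abstract names the standard OCR architecture: convolutional layers reduce a line image to a feature sequence, a recurrent layer models that sequence, and (typically) CTC loss aligns it to characters. The PyTorch sketch below is a generic minimal CRNN of that shape, not the project's released model; layer sizes and the alphabet size are placeholders.

    import torch
    import torch.nn as nn

    class CRNN(nn.Module):
        # Minimal line-level OCR model: CNN features -> BiLSTM -> per-step logits.
        def __init__(self, n_chars, height=32):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.rnn = nn.LSTM(128 * (height // 4), 256,
                               bidirectional=True, batch_first=True)
            self.fc = nn.Linear(512, n_chars + 1)  # +1 for the CTC blank

        def forward(self, images):                 # (batch, 1, height, width)
            f = self.cnn(images)                   # (batch, 128, h/4, w/4)
            f = f.permute(0, 3, 1, 2).flatten(2)   # (batch, steps, features)
            out, _ = self.rnn(f)
            return self.fc(out)                    # (batch, steps, n_chars + 1)

    # Training would pair these logits with nn.CTCLoss against character targets.
    model = CRNN(n_chars=300)  # placeholder size for the Gə'əz syllabary
    print(model(torch.zeros(1, 1, 32, 128)).shape)  # torch.Size([1, 32, 301])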
Reconstructing historical texts from fragmentary sources: Charles S. Parnell and the Irish crisis, 1880-86
Eugenio Biagini, University of Cambridge; Patrick Geoghegan, Trinity College Dublin; Hugh Hanley, University of Cambridge; Aneirin Jones, University of Cambridge; Huw Jones, University of Cambridge
Abstract
[en]
Charles Stewart Parnell was one of the most controversial and effective leaders in the
United Kingdom in the second half of the nineteenth century. Almost single-handedly, he transformed
the proposal of Home Rule for Ireland from a languishing irrelevance to a mass-supported cause.
Though the historiography on Parnell is substantial, his speeches – the main primary sources for
accessing both his thinking and strategies – have never been collected or edited. One of the core
questions in working towards an edition of his speeches was whether it would be possible
to use automated methods on these fragmentary sources to reconstruct what Parnell actually said in them.
We were also interested in how the reports varied, and what that variation might tell us about
the practices and biases of the journalists who wrote them and the newspapers which published them.
This article discusses the use of two digital tools in our attempts to answer these
research questions: CollateX, which was designed by Digital Humanities practitioners for the comparison
of textual variants, and SBERT Sentence Transformers, which establishes levels of similarity between texts.
In this article we talk about how the application of digital methods to the corpus led us away
from the idea of producing definitive reconstructions of the speeches, and towards a deeper
understanding of the corpus and the journalistic practices which went into its creation.
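Of the two tools, SBERT Sentence Transformers has a compact Python API whose generic usage pattern looks like the sketch below; the model name is a common default and the two "reports" are our own illustrative paraphrases, not project data.

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

    report_a = "No man has a right to fix the boundary of the march of a nation."
    report_b = "No man, he declared, can set a boundary to the march of a nation."

    emb = model.encode([report_a, report_b], convert_to_tensor=True)
    print(float(util.cos_sim(emb[0], emb[1])))  # cosine similarity, near 1 here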
Discourse cohesion in Xenophon’s On Horsemanship through Sketch Engine
Victoria Beatrix Fendel, University of Oxford; Matthew T. Ireland, Sidney Sussex College, University of Cambridge
Abstract
[en]
We build a Sketch Engine corpus for Xenophon’s classical Greek scientific treatise
On Horsemanship. Sketch Engine is a web-based
corpus-analysis tool that allows the user to inspect the lexical makeup of a text
(cf. keyword lists), explore the surroundings of select items (cf. concordances) and
identify fixed expressions in a text (cf. n-grams). We make available our
corpus-preparation tool and our corpus configuration file for Sketch Engine. We use
the Sketch Engine corpus to detect discontinuous verbal multi-word expressions,
specifically support-verb constructions (e.g. to take a
decision). We examine how support-verb constructions – through their
structural and lexical properties – aid discourse coherence and cohesion throughout
Xenophon’s treatise. We furthermore examine how the recurring support-verb
constructions in the treatise reflect the scientific register of the text. The
article shows how an understudied category of lexico-syntactic device (support-verb
constructions) in classical Greek substantially aids discourse cohesion, structurally
and contextually speaking. It also shows how an understudied text in the form of a
technical treatise (On Horsemanship) substantially furthers
insight into the scientific literacy of the classical period. Finally, by making
available our corpus-preparation tool and code, we hope to foster collaboration and
adaptation, and thus the improvement of existing tools rather than their
multiplication.
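For readers unfamiliar with Sketch Engine's input side: it ingests corpora in a "vertical" format, one token per line with tab-separated attributes (e.g. word, lemma, tag) and structural tags such as <doc> and <s>. The sketch below shows that general shape; the tokens and simplified tags are our own toy example, not output of the authors' corpus-preparation tool.

    # Write a minimal Sketch Engine "vertical" file: word TAB lemma TAB tag,
    # one token per line, with <s> elements delimiting sentences.
    sentence = [
        ("ἐπιμέλειαν", "ἐπιμέλεια", "noun"),  # toy analysis of a support-verb
        ("ποιεῖσθαι", "ποιέω", "verb"),       # construction ("to take care")
    ]

    with open("horsemanship.vert", "w", encoding="utf-8") as out:
        out.write('<doc id="xen-eq">\n<s>\n')
        for word, lemma, tag in sentence:
            out.write(f"{word}\t{lemma}\t{tag}\n")
        out.write("</s>\n</doc>\n")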
History Harvesting: A Case Study in Documenting Local History
Kimberly Woodring, Department of History, East Tennessee State University; Julie Fox-Horton, Cross-Disciplinary Studies, East Tennessee State University
Abstract
[en]
As a case study for the practice and application of digital history in a mid-size
university history department, this paper analyzes two History Harvest events
undertaken in a split-level digital history course. By examining the results of
two local History Harvests, specifically the participation of the greater
community outside the university and the preservation and digitization of
local historical items, we discuss the impact History Harvests can have on a
community, as well as on history students. The primary goal of both History
Harvests outlined in this paper was to work with the local community
surrounding the university to preserve pieces of local history. This article
provides guidelines for conducting a History Harvest, including suggestions for
community outreach, local university involvement with the greater community, and
digitization issues that might occur while conducting the Harvest.
Cluster Analysis in Tracing Textual Dependencies – a Case of Psalm 6 in 16th-century English Devotional Manuals
Jerzy Wójcik, The John Paul II Catholic University of Lublin
Abstract
[en]
This article uses cluster analysis to track textual affinities and identify the sources of different versions of historical texts, on the basis of the text of Psalm 6 found in 16th-century English manuals of devotion. The article offers a brief overview of the manuals of prayer examined, describes the methods of cluster analysis used within the present work, and shows how cluster analysis can enrich and guide traditional philological knowledge.
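In outline, such an analysis can be reproduced with standard libraries: profile each version of the psalm by character n-gram counts (which tolerate early modern spelling variation), compute pairwise distances, and cluster hierarchically. The sketch below illustrates that generic recipe under our own assumptions; the snippets are invented stand-ins, not readings from the manuals.

    from sklearn.feature_extraction.text import CountVectorizer
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, dendrogram

    # Hypothetical openings of Psalm 6 from four devotional manuals.
    versions = [
        "Lorde in thy wrath reproue me not",
        "Lord in thy wrath rebuke me not",
        "O Lorde rebuke me not in thine indignation",
        "Lorde rebuke me not in thyne anger",
    ]

    # Character trigram profiles, cosine distances, average-linkage clustering.
    X = CountVectorizer(analyzer="char", ngram_range=(3, 3)).fit_transform(versions)
    Z = linkage(pdist(X.toarray(), metric="cosine"), method="average")
    dendrogram(Z, labels=["A", "B", "C", "D"])  # show via matplotlib.pyplot.show()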
Project Quintessence: Examining Textual Dimensionality with a Dynamic Corpus Explorer
Samuel Pizelo, UC Davis; Arthur Koehl, UC Davis; Chandni Nagda, University of Illinois at Urbana-Champaign; Carl Stahmer, UC Davis
Abstract
[en]
In this paper, we present a free and open-access web tool for exploring the
EEBO-TCP early modern English corpus. Our tool combines several unsupervised
computational techniques into a coherent exploratory framework that allows for
textual analysis at a variety of scales. Through this tool, we hope to integrate
close-reading and corpus-wide analysis with the wider scope that computational
analysis affords. This integration, we argue, allows for an augmentation of both
methods: contextualizing close reading practices within historically- and
regionally-specific word usage and semantics, on the one hand, and concretizing
thematic and statistical trends by locating them at the textual level, on the
other. We articulate a design principle of textual dimensionality,
that is, approximating through visualization the abstract relationships between words
in any text. We argue that Project Quintessence
represents a method for researchers to navigate archives at a variety of scales
by helping to visualize the many latent dimensions present in texts.
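One of the unsupervised techniques such corpus explorers commonly combine is topic modeling; we assume it here purely for illustration, not as a description of Project Quintessence's code. The gensim sketch below fits a tiny LDA model over tokenized documents.

    from gensim import corpora, models

    docs = [  # toy tokenized "texts"; EEBO-TCP documents are far longer
        ["king", "parliament", "law"],
        ["god", "grace", "soul"],
        ["king", "law", "crown"],
        ["soul", "grace", "prayer"],
    ]
    dictionary = corpora.Dictionary(docs)
    bow = [dictionary.doc2bow(d) for d in docs]
    lda = models.LdaModel(bow, num_topics=2, id2word=dictionary, random_state=0)
    print(lda.print_topics())  # two word distributions, one per latent theme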
The Digital Environmental Humanities (DEH) in the Anthropocene: Challenges and Opportunities in an Era of Ecological Precarity
John Ryan, Southern Cross University; Lydia Hearn, Edith Cowan University; Paul Longley Arthur, Edith Cowan University
Abstract
[en]
Researchers in the complementary fields of the digital humanities and the
environmental humanities have begun to collaborate under the auspices of
the digital environmental humanities (DEH). The overarching
aim of this emerging field is to leverage digital technologies in
understanding and addressing the urgencies of the Anthropocene. Emphasizing
DEH’s focus on natural and cultural vitality, this article begins with a
historical overview of the field. Crafting an account of the field’s
emergence, we argue that the present momentum toward DEH exhibits four
broad thematic strains: perennial eco-archiving; Anthropocene
narratives of loss; citizen ecohumanities; and human-plant-environment
relations. Within each of the four areas, the article identifies how DEH
ideas have been implemented in significant projects that engage with,
envision, re-imagine, and devise communities for environmental action and
transformation. We conclude with suggestions for further bolstering DEH by
democratizing environmental knowledge through open, community-engaged
methods.
DH as Data: Establishing Greater Access through Sustainability
Alex Kinnaman, Virginia Tech; Corinne Guimont, Virginia Tech
Abstract
[en]
This paper presents methodology and findings from a multi-case study exploring the use of
preservation and sustainability measures to increase access to digital humanities (DH)
content. Specifically, we seek to develop a workflow that both prepares DH content for
preservation and enhances the accessibility of the project. This work is based on the
idea of treating DH as traditional data by applying data curation and digital preservation
methods to DH content. Our outcomes are an evaluation of the process and output using
qualitative methods, publicly accessible and described project components on two Virginia
Tech projects, and a potential workflow that can be applied to future work. By breaking
down individual projects into their respective components of content, code, metadata, and
documentation and examining each component individually for access and preservation, we
can begin migrating our digital scholarship to a sustainable, portable, and accessible
existence.
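One widely used way to package a project component with checksums and descriptive metadata, in the spirit of the component-by-component workflow described here, is the BagIt standard. The sketch below uses the Python bagit library; the directory path and metadata values are placeholders, and the authors' actual workflow may differ.

    import bagit

    # Convert a component directory into a BagIt bag in place: the payload
    # moves under data/, and checksum manifests plus bag-info.txt are written.
    bag = bagit.make_bag(
        "dh_project/content",  # placeholder path to one project component
        {"Source-Organization": "Example University Libraries",
         "External-Description": "Content component of a DH project"},
        checksums=["sha256"],
    )
    print(bag.is_valid())  # re-verifies the payload against the manifests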
Visualizing a Series: Aggregate Compositional Analysis of Botticelli's Commedia
Nathaniel Corley, Amherst College
Abstract
[en]
Applying digital methods as inputs to an interpretive process, I expose compositional motifs within Sandro Botticelli's momentous
Divina Commedia codex that depart from canonical manuscript illustrations. I then situate these
visual findings within Quattrocento literary and artistic theory, arguing that Botticelli manipulated his compositional structures
to harmonize with the humanist Cristoforo Landino's interpretation of the Commedia as an allegory for the
soul's ascension from “disorder” to “order”. By leafing through the pages of Botticelli's manuscript and perceiving
the striking structure and style of the illustrations, the observer could experience the incremental progress of Dante
the Pilgrim’s soul — and perhaps the viewer’s own — through the different stages of hell to paradise. Ultimately, I reflect on
the implications of digital methodologies within art history, and how these techniques may enrich or even challenge traditional
modes of “seeing” works of art.
Starting and Sustaining Digital Humanities/Digital Scholarship Centers: Lessons from the Trenches
Lynne Siemens, University of Victoria
Abstract
[en]
Along with the growth in Digital Humanities (DH) and Digital Scholarship (DS) as
digital methods, resources, and tools for research, teaching and dissemination,
interest in starting DH/DS centers as a means to support and sustain researchers and
projects is increasing fast. For those leading these initiatives, this raises
questions about how to engage possible stakeholders to develop support for a
center, apply existing models, secure funding sources, and more. This article
contributes to this discussion by examining the experiences of ten DH/DS centers in
North America and discerning smart practices for those wishing to start a similar
center. Often started by faculty or administrative champions, the interviewed
centers have a long history of operations. They offer a suite of activities and
services, including consulting, training, access to technology, and project
support, with staff drawn from libraries, faculties, student ranks, and other
locations. These efforts support teachers, researchers, and students in their
efforts to undertake DH/DS projects. The centers are often funded through a
combination of base budgets and soft money and may be based in a library or faculty.
The paper concludes with implications for practice for those wishing to start their
own DH/DS center.
Author Biographies
URL: http://www.digitalhumanities.org/dhq/preview/index.html
Comments: dhqinfo@digitalhumanities.org
Published by: The Alliance of Digital Humanities Organizations and The Association for Computers and the Humanities
Affiliated with: Digital Scholarship in the Humanities
DHQ has been made possible in part by the National Endowment for the Humanities.
Copyright © 2005 -

Unless otherwise noted, the DHQ web site and all DHQ published content are published under a Creative Commons Attribution-NoDerivatives 4.0 International License. Individual articles may carry a more permissive license, as described in the footer for the individual article, and in the article’s metadata.
