Abstract
Despite significant investments in the development of digital humanities tools,
the use of these tools has remained a fringe element in humanities scholarship.
Through an open-ended survey and virtual panel discussion, our study outlines
the experience of historians using various digital tools. The results of the
study reveal the variety of users interested in digital tools as well as their
enthusiasm, reactions, and frustrations, including the expectations and
confusion that have created barriers to tool use and to the wider adoption of new
research methodologies. We suggest that an emphasis on cultivating a broader
audience must be a concern not only for tool builders but also for funders, who
must ensure that projects account adequately for the time and expense of quality
interfaces and documentation.
Introduction and overview
Despite significant investment in digital humanities tool development, most tools
have remained a fringe element in humanities scholarship. As the 2005 Summit on
Digital Tools at the University of Virginia found, “only about six percent of humanist
scholars go beyond general purpose information technology and use
digital resources and more complex digital tools in their
scholarship”
[
Summit 2006]. Although this percentage is probably higher now, it is still far lower
than most digital humanists would prefer. Such low adoption has led to some
questions about how much of a priority adoption really is for tool builders, such as how
much effort they put into getting their tools used by practicing
scholars [
Bradley 2008a]. More recently, another study indicated
that only about half of tool developers considered the number of users that
adopted a tool as an indicator of success, only about a third ran usability
studies, and a disappointing 14% conducted surveys about the tool [
Schreibman and Hanlon 2010].
Lack of adoption has remained a difficult problem to solve, partly because it has
been unclear how scholars expect tools to behave, how they want to interact with
different kinds of tools, and how they perceive their present and future
utility. Funding from the National Endowment for the Humanities has allowed the
Roy Rosenzweig Center for History and New Media to extend and formalize some of
its ongoing dialog with users about the general usability of a handful of
research tools for digital history, especially for text-mining and visualization
— two of the more active areas of inquiry and tool development in recent
years.
Through an open-ended survey and a virtual panel discussion, our interested but
skeptical audience provided their take on the usability of digital tools and
resources targeted at historians. The results of the survey and panel discussion
reveal the variety of users interested in digital tools and their expectations,
as well as the typical confusion that perhaps has created barriers to tool use
and to wider adoption of new research methodologies. Although our results are
neither comprehensive nor definitive, we hope that the comments and attitudes of
the users will help tool builders produce more effective and more widely used
tools for humanities research. We recognize that history is a diverse field, and
that comments from comparatively few cannot possibly represent the views of all
historians. Nevertheless, we think our results help elucidate the needs and
expectations of a broad scholarly audience.
We provide more details about the survey and panel discussion in the following
sections, but a quick overview here will highlight some of the most important
findings. Our survey revealed that most respondents use technology to speed up
existing research practices — a result that has serious consequences for tool
builders who are trying to encourage new research methodologies. Builders must
be acutely aware that tool users have not necessarily made any commitment to
either the tool or the methodology, but are merely exploring research
possibilities. At least in our survey, when the utility or payoff was not clear,
our users tended to get frustrated and abandon the tool. With this in mind,
digital humanists should refrain from complaining about the limited acceptance
of their methodologies when the tools that are meant to promote them remain
inaccessible to a more general humanities audience.
Our panel discussion echoed what others have written prescriptively about tool
building, and, unfortunately, suggested that the digital humanities community
has not made substantial progress on these fronts. In 2003, when John Unsworth
called for digital humanists to better “demonstrate the usefulness of
all the stuff we have digitized,” the number of available tools was far
fewer [Unsworth 2003]. But even with more and better tools today, tool builders need to ensure
that exactly what their tools offer is crystal clear. Our survey also reiterated
the need for “community building and
marketing functions” as “few projects do the necessary branding,
marketing, and dissemination” required for substantial adoption
[Cohen et al 2009]. The voices of our respondents nicely complement the advice delivered in
a relatively recent CLIR report directed toward digital humanities centers [
Zorich 2008]. Because many more tools are being produced outside
of DH centers, and the user base has broadened even further, getting the
expectations and complaints of users in their own words perhaps will help to
provide a clearer path forward for tool builders.
Not long ago, Christine Borgman argued that “until analytical tools and
services are more sophisticated, robust, transparent, and easy to use
for the motivated humanities researcher, it will be difficult to attract
a broad base of interest within the humanities community”
[
Borgman 2009]. Our results suggest that tools do not need to be more sophisticated
(because this increases skepticism and decreases the possibility of a modular
approach to building and using simple, intuitive tools), but that ease of use
and transparency are far more important. Overall, our respondents were
frustrated by a number of their expectations going unfulfilled, namely intuitive
interfaces, clear documentation, real-world examples, and help with
understanding how a given tool interfaces with data. But it is perhaps best not
to think of ease of use and transparency as features of a tool, but
rather in terms of how tool builders attempt to engage and develop a
relationship with tool users. Building on the sentiments of one of our
respondents, we have organized our remarks below around the principle that tool
builders must consider themselves as entering into a social contract with tool
users. We highlight below what we consider the key features of this
contract.
Taken as a whole, the tenor of the responses suggests that digital humanities
tool development projects ought to increase the scope of their imagined
audience. Tool interfaces must help more “traditional”
historians feel more comfortable with new ways of visualizing, analyzing, and
thinking about sources and about data. Furthermore, the envisioned audience
should not only be those trying to use the tool, but those trying to understand
what it can do and why it matters. Even scholars who are disinclined to use such
tools themselves ought to be able to understand what it is that other people are
doing with them and how their sophisticated use might well constitute valuable
scholarship in itself.
Survey of existing practices
We first developed a survey that allowed a diverse group of historians to voice
their interests, concerns, and views about digital research methodologies. We
published the survey openly online, and invited participation through the
popular History News Network and through direct solicitations of graduate
history departments. Our 213 respondents (mostly from North America and Western
Europe) were about equally split between graduate students and professors at
various stages of their careers. These historians shared roughly 500-word
reactions and thoughts about their experiences using digital tools in their
research. Our aim was to provide a space for historians to present their own
ideas, not simply an opportunity to agree or disagree with our ideas, or to
select from predetermined choices as in a formal survey. In other words,
this was a qualitative rather than a quantitative survey. As such, we asked our
participants to answer a set of six open-ended questions about how they used
existing tools and resources, and the extent to which these tools and resources
met their needs. Below we summarize the most common sentiments relating to how
these historians viewed their own tool use.
Technology is used principally to speed up traditional research
methodologies. When asked to “give examples of how digital
resources and tools are allowing you to do research (and to learn things)
that you couldn’t have done ten or twenty years ago” our historians gave
two primary responses. First, they focused on ease of access and the way in
which various databases allow them to more quickly access things they would have
otherwise walked to the library for. As one user reported, “I start with
Google to get sense of what might be out on the web and then I go to
specific resources that I know of such as the IMB, WorldCat, etc.”
Second, they reported that they use Google and Google Books to track down
unusual terms and unique quotations. Much like Patrick Leary described in his
2005 article, “Googling the Victorians,” these
historians use Google search and Google Books as a means to find references to
obscure people and events [
Leary 2005]. One typical response:
“If I type a person's name or event (especially obscure ones) into Google
books, I can find works outside of my major field that make reference to
them.”
Extensive use of Google and JSTOR suggests that resources should be
visible with high-level searches. When asked “which online or
digital resources have you used in your scholarly research and writing,”
almost all resources mentioned are repositories of either primary or secondary
source material. Google services were mentioned a total of 100 times by 70
participants, and JSTOR was mentioned 99 times by 98 participants. 72
participants made reference to “library” or “libraries” sites, 20 of
which were references to the Library of Congress Online Catalog. For comparison:
25 participants mentioned ProQuest, 23 mentioned ARTstor, and 19 mentioned
Wikipedia. One of the key reasons individuals gave for using these specific
sites was their ease of use; individuals reported feeling lost on many other
sites that did not offer an intuitive interface. Users reported that they were
generally unwilling to dig deep into sites to find information. In short, the
lesson here is that information needs to be visible with high-level
searches.
Respondents preferred quantity to quality with digitized primary and
secondary sources. Those surveyed expressed overwhelming gratitude
for the availability of online primary and secondary source material. In
contrast to other disciplines like philology or textual criticism, where exact
transcription is crucial, historians frequently preferred resources that offer
large quantities of materials with even a crude full-text component. This
sentiment likely reflects their primary use of technology, namely that finding
references and information is a much higher priority than using tools to analyze
primary sources (which generally requires greater textual fidelity). At the same
time, the respondents were concerned about gated access to resources and the
quality of the resources that are freely available online. Respondents referred
several times to the adage “you get what you pay
for” in reference to the quality of non-copyrighted materials that
are often used to create full-text resources. This reflects the fact
that many easily available texts are late nineteenth- or early twentieth-century
editions or translations that have been superseded by more recent scholarship.
But because these older editions are now in the public domain, they can be scanned
and ingested into large repositories like the Internet Archive.
Knowledge of digital tools for historical analysis has not become very
widespread. While our respondents were excited about repositories of
primary source materials online, they made little comment about a need for, or
interest in, any specific tools to help make use of these archives in novel
ways. With the exception of a few mentions of Zotero (11) and Endnote (5), there
was no mention of other third-party tools, or of methodologies involving text or data
analysis and visualization. Nor was there significant mention of any emergent
technologies, like projects and standards that leverage the semantic web for
information discovery. This might suggest that there is little general interest
in such tools and technologies, but as our panel discussion (as described below)
suggests, the average historian might be considerably more interested in using
digital tools for research than has been recognized.
Overall, the results confirm what one may have expected: the uses of
digital tools among our respondents are of the most general
kind, namely Google searches and the use of digitized primary and secondary sources. On
one level, then, many historians use tools for little more than speeding up
their existing practices. However, the emerging practice of using Google
searches to find obscure terms in obscure texts from other disciplines suggests
that even basic search technology is expanding, even if only slightly, standard
historical methodologies. As these historians try out searches for terms in the
corpus of books Google has digitized, they are integrating this digital corpus
into the hermeneutic process of testing ideas, theorizing about the past, and
even conceptualizing a new kind of historical archive.
Panel discussion
To explore more deeply scholars' perspectives on the potential value of specific
tools, we convened a smaller panel from the survey participants by emailing all
those who responded to the survey and asking for more involved participation
over a longer period of time. Over the course of four months the self-selected
panel discussed seven different specific digital tools at
nexthistory.org, generating 130
comments about the utility and usability of the tools. All questions were posed
to the entire panel. Even though the number of panel participants does not
constitute an especially large sample size and cannot possibly represent all
historians, the comments they produced were remarkably consistent and revealing.
We think that the concerns they expressed are broadly relevant for tool building
in the digital humanities, especially history. Our criteria for what might be
counted as a “tool” were
broad, and it could be argued that some of the “tools” that our panel evaluated aren't best
called tools at all (in some cases perhaps a platform, or a particular
visualization, for example), but we decided to include a variety of freely
available web services that could potentially help historians manipulate and
understand historical texts or data. We thought that a broad definition of tool
would yield the most robust feedback that could speak to the variety of tools
now being produced rather than to a narrow subset of them. Comments from the
respondents add some specificity to some previous surveys, such as a CLIR report
on digital tools, which employed an even broader definition of “tool” and focused on
accessibility [
Shilton 2009].
We chose to solicit comments on tools that we thought together represented a wide
range of design, functionality, and sophistication, and that showed promise of
applicability to a variety of fields within history and across other
disciplines. Each of these tools has indubitably advanced both practice and
theory in the digital humanities, and any criticism of individual tools and
projects (both here and from the panel discussion itself) should not detract
from the important contributions these tools have made and will continue to make to
the field. Without such tools already in place, it would be impossible to have a
practical discussion of ways to make tools even better and more widely used. Nor
is this to say that these tools have been judged more important than others that
were not discussed. However, we do not believe that having chosen more or other
tools would have elicited substantially different feedback.
A number of the tools focused on visualization. Many respondents mentioned that
being prompted to think about visualizing their data was useful to them in
ways that they did not expect. “Thinking through what I might do and
how I might present it,” wrote one respondent, “...has helped to sharpen my analysis
of my research and find areas where I need to or would like to know
more.” Another panelist, thinking even more broadly, “...could imagine a
visualization that might alter some of the standard narratives.” On
the other hand, panelists also revealed how striking visualizations are not
necessarily inspiring to those who would prefer to continue thinking in terms of
texts. Regarding the
Favoured Traces visualization that shows how
On the Origin of
Species changed from one edition to the next, one commenter lamented
a distancing from the real text: “I guess I do not see how it would
work. The information is shown as flashing lights, rather than the actual
text and I did not see anything that looked like page numbers.” Such
a sentiment serves as a reminder that users may expect to see traditional
devices like page numbers, even with online texts or visualizations for which
they are not entirely appropriate. Interface designs conscious of such users
will help more “traditional” historians feel more comfortable with new ways of
visualizing, analyzing, and thinking about sources.
As some of the above quotations suggest, the tools that had the most positive
feedback were the ones that easily allowed historians to develop quick views of
their data from multiple perspectives — especially if the tools demonstrated
potential to help historians navigate the overwhelming amounts of material that
they had accumulated. Users wanted to explore and play with ideas quickly. In
fact, they commented on how they much preferred “easy access”
tools to those that created (or tried to create) a polished visualization. One
of the more popular, if simple, examples was
Wordle, a tool that helps visualize
word frequency within a given text with a stylish word cloud. One user commented
that the ease of use made him want to explore more about how he could use
linguistic analysis in his historical research: “If I had done this at the beginning of my
research of these texts, it might have inspired me to take my analysis in a
different direction.” Especially for their ability to generate quick
views of data, respondents were enthused about the teaching potential of the
tools for giving quick impressions of texts in the classroom. This sentiment
signals that the classroom may be an excellent place to encourage use of some of
the more straightforward tools, even if in a limited capacity. With familiarity
and comfort, a user might well begin to push the tool a bit harder for their own
research purposes.
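The computation behind such a view is itself quite modest, which is part of its appeal. As a purely illustrative sketch (our own, not Wordle's code), counting word frequencies in a plain-text file might look like the following in Python; the file name and the small stopword list are hypothetical placeholders:

import re
from collections import Counter

def word_frequencies(path, stopwords=None, top_n=25):
    # Count the most frequent words in a plain-text file, skipping a few common stopwords.
    stopwords = stopwords or {"the", "and", "of", "to", "a", "in", "that", "is"}
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    counts = Counter(w for w in words if w not in stopwords)
    return counts.most_common(top_n)

# Hypothetical usage, assuming a local plain-text copy of the source in question:
# for word, count in word_frequencies("origin_of_species.txt"):
#     print(word, count)

A word cloud is little more than a typographic rendering of such a frequency list, which is precisely why respondents found it so quick to try and so easy to bring into the classroom.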
Beyond the benefits of quick visualization, however, users found more to dislike
than to like about the tools overall. This appears less a result of the
tools per se than of the substantial gap between what the user expected the tool
to do (though this generally came from the tool documentation — more on this
later) and what the user could get it to do. Comments suggest that our
non-technical users either could not generally appreciate what several of the
more complex tools were designed to do, or were unable to recognize their
potential value in historical research. They complained that much of the
documentation was written in a highly technical fashion, and was virtually
unintelligible to them. Many panelists struggled with
SEASR (The Software Environment for the
Advancement of Scholarly Research), which provides a range of text analysis
tools in a virtual environment. Although the website has some helpful videos and
other well-written descriptions, the overall documentation was considered to be
far too technical. One panelist remarked that using the site “felt as if I was testing for a
foreign language test without enough study.” In many cases,
individuals had little ability to imagine what any of the individual tools might
be truly useful for. One of the most commonly expressed sentiments was a
variation of the statement expressed most succinctly by one panelist: “I think it will be useful, but
I’m not sure how yet.”
In order to help bridge the expectation gap, one panelist cogently suggested that
tool builders ought to think about their work as establishing a social contract
between them and the user. For a user to even consider using a tool, the tool's
website needs to establish that the time devoted to deploying it will generate
results that warrant the investment. One user wrote, “So I think that the continued creation of these tools is really
important, and think that they need to be explained, with some real-world
examples, so that we can have a better sense whether the tool can do what we
want.” In short, users wanted the theoretical benefits spelled out in
plain language irrespective of discipline. Panelists both implicitly and
explicitly suggested that tool builders might pitch their tools in slightly
different ways. They might, for example, focus more on the immediate research
convenience that the tool provides. As one user commented, “I am skeptical that it would reveal ‘hidden information’ — but if it convinced me that I
could save time — and that it was reliable and worked across different
languages — then I’d be all ears.” Perhaps this is to say that the
language used to educate users about the tool needs to be different from the
language used to secure funding for a project. Grant language tends to emphasize
innovation and revolutionary benefits over more modest quotidian uses. But it's
the everyday uses that new users are more likely to find immediately helpful to
their own research. Although less impressive-sounding, a more realistic
description of the benefits and limitations could encourage wider adoption of
tools, especially considering that most users will not be prepared to make
sophisticated and revolutionary use of the tool right away.
Perhaps growing out of the confusion about the possible research utility of a
given tool, another common complaint was that the tools were not really doing
history, in the sense of creating meaning from data. In response to tools that
performed some kind of linguistic analysis, many users questioned how much data
analysis can tell us about content and meaning. In reference to SEASR, for
example, one panelist complained that “whereas
it may help determine frequency or clustering, it doesn’t tell me how or
why. As I have indicated with other mining tools, this kind of tool can only
take me so far, then I must consult other sources and methods to know how
and why something happened in the past.” Similarly, there seemed to
be confusion about what the Favoured Traces website was meant to do or show. One
panelist felt compelled to point out that “it is
really better suited to directing you to portions to study more closely than
it is as a tool to actually do the studying.” Such direction, of
course, is one of the main goals of the visualization. Similarly, there was
concern that the use of these tools strayed too far beyond the fundamental
purpose of the humanities: “I could do with less
jargon and more about how to use it. I am suspicious of the flow charts,
perhaps that’s just me, but I thought the humanities are about matters that
resist measure-and-manage control.”
Needless to say, these important epistemological concerns are neither new nor
exclusive to the digital humanities. What is worth noting, however, is that
users generally identified perceived shortcomings of textual analysis or
visualizations as problems with an individual tool, not with the methodology
itself. On one hand, their comments may be an inadvertent conflation of a tool
and the research methodology it facilitates. On the other hand, they may suggest
that our non-technical users have a fundamental misunderstanding of what
technology is truly capable of in terms of historical work. Perhaps they were
led to believe that the tools were capable of much more than they are — and
probably more than even the tools' creators would claim. For example, one
respondent asked if
Mark Davies's Time
Magazine corpus (and the interface to it) constitutes “a successful
interpretive tool in itself or only a step along the way to understanding
and interpretation.” Not many tool builders would likely claim that a
visually attractive word frequency diagram could possibly be considered a
valuable interpretation in its own right. They must, however, be aware of this
mindset. Addressing it directly will help users understand not only what the
tools can actually do, but also what profitable applications of the tool might
be in the course of their research.
Social contracts between tool builder and tool user often center around using
data in new ways, and it was the “if you can get it to work” clause of the contract that left so many
users frustrated. The difficulties surrounding data standardization became very
apparent. When asked to evaluate the
Shaping the West
project from the Stanford Spatial History Project that produces a
dynamic visualization of board members of U.S. railroad companies from
1872 to 1894, several respondents complained that they wanted to see
their own data represented in a similar fashion. However, they didn’t ask about
or comment on the difficulty of doing so (which would be rather
substantial). When presented with
Many
Eyes, an online tool from IBM that allows standardized data to be
uploaded and viewed in a variety of formats (though nothing as visually dynamic
as
Shaping the West), users complained that they
couldn’t get the data to display as they wished. Users were unclear about how to
standardize the data in a useful way, even when they knew it needed to be
done.
The visible problems with conceptualizing and manipulating data encountered by
our users illustrate the difficult partnership between technologist and
scholar, and underscores the need for tools to help promote data literacy. Tool
builders must provide clear — but not overly simplistic — sample use cases that
illustrate the necessary preparatory steps for using data with a tool. These
tutorials must be in addition to tutorials on using the tool itself after data
has been properly prepared. Many Eyes, for example,
largely assumes that one can and will figure out how to format the data to get
the desired representation. For the kinds of researchers who built the tool (and
perhaps its intended audience), it may be obvious how data must be formatted, or
intuitive to figure out. For our historians, however, such tools were perceived
as little more than a black box. Inability to represent the data in different
ways was thought to be the fault of an uncooperative interface rather than a
problem with how the data was uploaded. It seems that providing more education
about manipulating data is essential to improve adoption of tools that depend on
carefully formatted data. Recognizing that everyone has idiosyncratic work
habits, we do not suggest that everyone must organize data in the same way.
Rather, we hope that tools will begin to play a more active role in educating
users about the data formats most useful to the tool, and also in providing
examples of the most common transpositional steps required to interface data
with the tool.
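To give a sense of what such a documented preparatory step might look like, the following is a purely illustrative Python sketch; the field names, sample records, and tab-separated output format are hypothetical stand-ins for whatever layout a given tool actually expects. It reshapes a handful of record-style notes into the flat rows-and-columns table that upload-based visualization tools typically want:

import csv

# Hypothetical records a historian might have gathered (field names are illustrative).
records = [
    {"name": "Smith", "company": "Union Pacific", "year": 1872},
    {"name": "Smith", "company": "Union Pacific", "year": 1873},
    {"name": "Jones", "company": "Central Pacific", "year": 1872},
]

def records_to_table(records, out_path="board_members.tsv"):
    # Write the records as a tab-separated table, one row per record --
    # the kind of flat layout that upload-based visualization tools usually expect.
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "company", "year"], delimiter="\t")
        writer.writeheader()
        writer.writerows(records)

records_to_table(records)

Walking a new user through even one such concrete conversion, in the documentation itself, would go a long way toward dispelling the black-box impression our panelists reported.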
Even if users were able to create visualizations, they were often unclear about
what to do with the tool output. One respondent lamented that “most of these [tools] just produced fairly pretty
looking or complicated displays that I couldn’t really figure out how to
use. The things that I am looking for are ways to take structured data, that
I might gather about events, organizations, people, etc. and create
interactive maps, or visualizations of the links between people.”
This is obviously easier said than done. But this also suggests that because
interaction is crucial, the user interface cannot be thrown together after the
fact. Furthermore, both the interface and documentation must err on the side of
obvious rather than clever. For example, several visitors to Favoured Traces commented that they found it utterly
useless because you could not see the text itself, but only a representation of
it. In fact, simply hovering over the “visual
text” reveals the actual text, and only a little more work by the user can
deliver a file of the entire edition.
Lessons from respondents
So much for the major criticisms from our panelists. With the goal of making
tools more easily adopted by the average historian, we have tried to distill the
suggestions from our users into a comparatively small set of considerations for
tool builders. These might be considered the key features of the contract
between tool builder and tool user. Most are not groundbreaking suggestions, but
they underscore the real needs of interested users — needs that have not been
adequately met. And as stated earlier, digital humanists should refrain from
complaining about the limited acceptance of their methodologies when the tools
and techniques that they have developed remain opaque or even unintelligible to
an interested, general humanities audience.
Tools have generally neglected the typical humanities user in their
design and documentation. Builders of digital humanities tools,
especially those that deal with technically more sophisticated techniques,
like text mining and visualization, could considerably increase their tools'
visibility and speed of adoption with more attention to their user interface and
clear instructions with example use cases. The intended audience of most tools,
to the extent that a discernible one presents itself, seems to be technically
sophisticated users who are already sold on the value and utility of the tool
and who are willing to play around with the tool to get a sense of its
possibilities. But as our panelists' interests suggest, the potential audience
is far larger. The philosophy of “Don't Make Me
Think” comes to mind as the kind of usability experience the
participants expected [
Krug 2005]. Scholars approach scholarly
software as software first and scholarship second. Any intellectual nuance that
might be useful to the visitor must come after having met their expectations for
web and software design. That means simply and clearly indicating what the tool
is and how one uses it, and making it as easy as possible to get started
and to see some initial results, even if only approximate, from the tool. While
it is important to minimize the black-box problem by explaining how the tool
works, it is equally important that such explanations don't crowd out a more
basic explanation for new users.
Provide concrete examples and explain the methodological value.
Documentation needs to be non-technical in two ways. First, and most obviously,
it must explain the basics of how to operate the tool, and it would be extremely
helpful to provide examples about using the tool itself — that is, to present
specific examples of analysis across several disciplines. The cost of
creating such content is not negligible, but the benefit would be
disproportionately high. As one participant noted, “I think that the continued creation of these tools is really
important, and think that they need to be explained, with some real-world
examples, so that we can have a better sense whether the tool can do what we
want.” This also suggests that users might benefit from some
explanation about the tool that could help them do things that they didn't know
they wanted to do. The second, and perhaps more crucial, aspect of the
documentation should explain in general terms how the methodology of the tool can
be useful. This would provide important motivation for the curious scholar who
has come across the tool (or has been directed to it) but remains skeptical of
the value of a new methodology. Documentation should explain, with examples, how
the research methodology that the tool embraces can be useful and appeal to
users across a variety of disciplines. Especially if tool builders typically see
their role as making a methodology more accessible to scholars, they should
include some justification and explanation of the methodology in their
documentation. Even if methodological diffusion is not the principal goal of the
project, explicit attention to it will only further the larger mission and
adoption of the tool.
Be clear about the limitations of the tool and set reasonable
expectations. Though it may appear obvious to the technically
sophisticated humanist tool producer, tool introductions need to be clear that
the tools themselves neither function as substitutes for historical research nor
attempt to produce historical knowledge. It cannot be overemphasized to the new
user that the tools simply facilitate historical research by revealing trends or
themes that might have otherwise gone unnoticed, and that to interpret what such
trends or themes might mean remains the work of the historian. For the time
being, then, tool builders might tone down the rhetoric about the interpretive
power of the tools and how they can revolutionize research. Similarly, they
should be encouraging users to think more deeply about the way tools create
different views of, and interactions with, information, which can then be
interpreted and used as a means for developing and exploring historical
arguments. Certainly, technically sophisticated users will have a better
understanding of how a tool works, and will use the tools in more complex ways
to facilitate their own analysis. But this should not be the only audience that
developers try to engage with.
Allocate more resources to user interface development. The user
interface for many digital projects often seems developed as an afterthought,
thrown together after the core functionality is complete. However, a truly good
user interface requires significant investment in design and development that
needs to be integrated into the project timeline and budget. It needs to be
flexible to accommodate expanding tool features. Scholarly software designers
should give more consideration to the research on user-centered design approaches (e.g. [
Brown 2006e]; [
Garrett 2002]) and theories
associated with user experience. Development should also include extensive
testing. Bugs and crashes frustrated many panelists. Though some instability
is unavoidable with prototype tools, scholars were almost resentful that the time they
had invested in a tool was wasted because of a critical failure,
which in turn lessened the likelihood that they would return to the tool even after
stability improves.
No tool is an island; tools must support combinatorial approaches to
data. As digital tools become more easily accessible and more
primary sources become available online, data standardization becomes even more
crucial. To this end, tool builders might collaborate with data repositories and
other tool projects to encourage compatibility between different formatting standards.
This is not to say that all humanists and repositories must adhere to the same
standards or data formats. No single approach can possibly accommodate the
myriad kinds of resources and institutions that are making data available. But
there is a willing audience at hand, and some explicit training about data
standardization — especially since it's not exactly widespread in typical
humanities training — could offer a tremendous boost to tool adoption. Our
panelists were excited and inspired by the visualizations of Shaping the West; their technological uncertainty
hardly deterred them from attempting new visualizations with Many Eyes. However, data roadblocks were fatal. For
the former, users found that substituting their own standardized data was
impossible; for the latter, users found it too difficult to standardize their
data in an appropriate way. Similarly, tools need to be as interoperable as
possible, especially in terms of how they can import and export data. “People already have databases,” lamented
one participant, “...it would be nice to have
easy ways of accessing the data.” The larger sentiment was not just
about access, but also about sharing the data between tools.
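One concrete, if modest, form that “easy ways of accessing the data” can take is a plain export from an existing database into a delimited file that most tools can import. The following Python sketch is purely illustrative; the database file, table name, and output path are hypothetical:

import csv
import sqlite3

def export_table_to_csv(db_path="research.db", table="events", out_path="events.csv"):
    # Dump every row of one table to a CSV file that other tools can import.
    conn = sqlite3.connect(db_path)
    cursor = conn.execute("SELECT * FROM " + table)
    headers = [col[0] for col in cursor.description]
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(headers)
        writer.writerows(cursor)
    conn.close()

# Hypothetical usage:
# export_table_to_csv("research.db", "events")

Tools that meet users halfway in this manner, by accepting and emitting such lowest-common-denominator formats, lower the data roadblocks our panelists found fatal.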
Conclusions: broader goals for digital humanities tools
The participants in our survey and our panel discussion showed a great deal of
enthusiasm for digital research tools and eagerness to engage with them.
Although their interest is demonstrable, unfortunately so is the insufficient
usability of many digital humanities tools. As our panelists indicated,
concerns about the extent to which digital humanities tool developers consider
what humanities scholars actually want to accomplish [
Warwick 2004] remain strong.
From explicit statements alone, it appears that our scholars are about equally
divided as to whether widespread adoption of digital tools should happen in
their field of history. Perhaps this reflects optimistic and pessimistic points of
view about technological change generally. But what resonates most strongly from
our panel discussion is that virtually all of the participants reported at some
point a glimmer of hope with respect to how digital tools might help them to
research in new ways and re-conceptualize their work. Yet their frustrations
over steep (or insurmountable) learning curves considerably dampened their
hopes. At the same time, our panelists made clear that, if interesting results
could be produced in a short time, they would be inspired to use the tools even
more. Perhaps such rough and ready use should be a more explicit aim of digital
humanities tool development. With the first wave of digital humanities tools
having produced excellent experimental and prototypical work, the fundamental
barrier to wider adoption of digital tools seems to lie now in quality
interfaces, accessible documentation, and expectations management.
Many tools now seem to downplay the importance of the user interface and
documentation with the implicit rationale that people who are really interested
in using the tool will figure out how to make the tool relevant to their own
work. Our survey and discussion show that this is often not the case. There are
plenty of interested, curious, and technically capable humanities researchers
who have little time and patience for trial and error with new methodologies
when they are uncertain of their value. However, they remain receptive to the
possibilities offered by the tools. When considering a user sympathetic to the
promise of digital history tools, some leading by the nose is not only helpful,
but also necessary. An appropriate social contract is not just about writing
functional code, but also about creating an experience that helps mediate a
potentially uneasy relationship between data (regardless of representation) and
researcher.
Furthermore, though often seemingly outside the scope of a tool-building project,
tools should not only document their functionality, but should also explicitly
encourage scholars to approach their work in new ways. And in the midst of
embracing new kinds of methodological challenges and cutting-edge tool
development, tool designers must not forget the importance of a simple and clear
user interface. It must not only make it easy to use the tool in productive
ways, but also explain what the tool is for, provide examples of how it can be
used, and give non-technical details about how it works in order to minimize the
skepticism of black-box analysis. Such ease of use will hopefully bring
increased integration of technology in humanities instruction, especially in
terms of research methodologies and awareness of the importance of data
standardization, so that humanists are better able to communicate with
archivists, librarians, and technologists who tirelessly facilitate data
exchange (whether analog or digital).
Taken as a whole, the tenor of the responses suggests that digital humanities
tool development projects ought to increase the scope of their imagined
audience. The audience for early tools was, and in some ways needed to be, other
technically sophisticated humanists. But the potential audience has broadened
considerably. Put another way, tool builders might consider both their tools and
their target audience as more transitory than revolutionary. Keeping the
cautiously optimistic user in mind would encourage a wider user base and
facilitate the traction of digital humanities methodologies. Traditional
humanists are willing to venture down the digital path, but they need to feel
comfortable along the way.
An emphasis on cultivating a broader audience and new relationships with them
must be a concern not only for tool builders, but also for funders of such
tools, who must ensure that tools adequately account for the time and expense of
quality interfaces and documentation. Prioritizing a wider audience can help
further adoption of tools in general, and thus further the acceptance of their
use and development as comprising legitimate scholarly work. Even scholars who
are disinclined to use such tools themselves ought to be able to understand what
it is that other people are doing with them and how their sophisticated use
might constitute valuable scholarship in itself.