William G. Thomas, III
Computing and the Historical Imagination
In the late 1960s and early 1970s historians seemed to think that their profession, the craft and art of history itself, was on the brink of change. Everywhere one looked the signs were unmistakable. A kind of culture war broke out in the profession and a flurry of tense conference panels, public arguments, and roundtables took place with titles such as "The Muse and Her Doctors" and "The New and the Old History." This culture war pitted the "new" history, largely influenced by social science theory and methodology, against the more traditional practices of narrative historians. The "new" historians used computers to make calculations and connections never before undertaken, and their results were, at times, breathtaking. Giddy with success, perhaps simply enthusiastic to the point of overconfidence, these historians saw little purpose in anyone offering resistance to their findings or their techniques. When challenged at a conference, more than one historian responded with nothing more than a mathematical equation as the answer. Computers and the techniques they made possible have over the years altered how many historians have understood their craft. To some they have opened the historical imagination to new questions and forms of presentation, while to others they have instead shuttered the historical imagination, at best limiting and channeling historical thinking and at worst confining it to procedural, binary steps. This chapter traces where the historical profession has come in the years since these professional debates and tries to assess how computing technologies have affected the discipline and how they will shape its future scholarship.
Not all historians in the 1960s and 1970s considered the computer the future of the profession. One railed against worshiping "at the shrine of that bitch goddess QUANTIFICATION." Another, Arthur Schlesinger, Jr., one of America's foremost historians, wrote in reply to the rising confidence in cliometrics and quantitative history, "Almost all important questions are important precisely because they are not susceptible to quantitative answers" (Swierenga 1970: 33). Other critics of quantitative methods took aim not at the methods these historians used but instead questioned whether the "armies" of graduate students needed to develop large-scale projects and the technicians to maintain them were worth the cost. One prominent British social historian, Lawrence Stone, disparaged the contributions of the new history and the costs it entailed: "It is just those projects that have been the most lavishly funded, the most ambitious in the assembly of vast quantities of data by armies of paid researchers, the most scientifically processed by the very latest in computer technology, the most mathematically sophisticated in presentation, which have so far turned out to be the most disappointing." These sentiments are still widely held in the discipline, and at the heart of them lie fundamental disagreements over the practice and method of history.
In the United States the greatest fight in the contest over computational methods in history took place in 1974 with the publication of Robert Fogel and Stanley Engerman's Time on the Cross: The Economics of American Negro Slavery. The book was a magisterial work of quantitative methodology and historical analysis, and it came in two volumes, one of which was dedicated entirely to quantitative methods and data. Fogel and Engerman admitted in their prologue that the book "will be a disturbing one to read." They also admitted that cliometrics, or quantitative history, had limitations and that "there is no such thing as errorless data" (Fogel and Engerman 1974: 8, 10). Fogel and Engerman concentrated their study on some very hotly contested issues: the economic profitability of slavery, the economic success of the South in the years before the Civil War, and the relative productivity of slave and free agriculture.
Time on the Cross received extensive criticism for two main reasons. First, the book rested its key findings and interpretations almost entirely on quantitative analysis and addressed purely economic questions, largely ignoring the textual and qualitative analysis of other historians as well as the social and political context of slavery. Second, the book addressed one of the most explosive and challenging subjects in American history – slavery. It seemed everyone was interested in the subject. Fogel and Engerman appeared on national television and in Time magazine, and were reviewed in nearly every newspaper and magazine. The publication of the methods and evidence volume alone was enough to intimidate many scholars.
C. Vann Woodward, a distinguished historian of the American South, in an early review of the book was understanding of the authors but clearly concerned about where their computers had led (or misled) them. He noted that "the rattle of electronic equipment is heard off stage, and the reader is coerced by references to 'vast research effort involving thousands of man and computer hours' and inconceivable mountains of statistical data." Woodward let it be known what the stakes were: "The object of the attack is the entire 'traditional' interpretation of the slave economy." Woodward could hardly believe the line of reasoning that led Fogel and Engerman to assert that on average only "'2 percent of the value of income produced by slaves was expropriated by their masters,' and that this falls well within modern rates of expropriation." It was one of many disturbingly cold and skeptically received findings that the cliometricians put forward. Still, Woodward concluded that "it would be a great pity" if the controversy inflamed by Fogel and Engerman's conclusions were to "discredit their approach and obscure the genuine merits of their contribution" (Woodward 1974).
Other reviewers were unwilling to concede any ground to the cliometricians. Withering criticism began shortly after the book was published, and it was led by Herbert Gutman, whose Slavery and the Numbers Game (1975) put forward the most devastating attack. As if to mock the cliometricians' elaborate reliance on numerical analysis and technical jargon, Gutman consistently referred to the book as "T/C" and to the authors as "F + E." Gutman's blood boiled when Fogel and Engerman icily concluded about the frequency of slave sales that "most slave sales were either of whole families or of individuals who were at an age when it would have been normal for them to have left the family." Gutman focused not on the statistical accuracy or the techniques of the cliometricians, but on the implications of their assumptions, the quality of their evidence, and how they used evidence. "Computers are helpful", Gutman pointed out, but not necessary for understanding the assumptions behind the statistical evidence (Gutman 1975: 3). Much of his critique faulted Fogel and Engerman's model of analysis for making almost no room for enslaved persons' agency, and instead making consistent and deeply embedded assumptions that everything in the enslaved person's life was directly subject to control and action by the slave holder. Gutman suggested that Fogel and Engerman's methodological, statistical, and interpretative errors all consistently aligned to produce a deeply flawed book, one that depicted the plantation South and slavery in a benign, even successful, light. In the end, the Fogel and Engerman finding that the typical enslaved person received less than one whipping per year (0.7, to be exact) helped drive home Gutman's critique. The number was presented as relatively minimal, as if such a thing could be counted and its effect established through quantification. 
Gutman pointed out that the really important measure was to account for the whip as an instrument of "social and economic discipline." The whipping data for Time on the Cross came from one plantation, and when Gutman re-examined the data he found that "a slave – 'on average' – was whipped every 4.56 days" (1975: 19).
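The gap between the two figures is entirely a matter of the unit of analysis, and the arithmetic can be made explicit. The plantation population below is back-solved from the two published figures for illustration; it is not a number Gutman reports:

```python
# Reconciling Fogel and Engerman's per-capita figure (0.7 whippings per
# enslaved person per year) with Gutman's plantation-wide reading of the
# same records (a whipping every 4.56 days).
# The population figure is back-solved for illustration only.

PER_PERSON_PER_YEAR = 0.7   # Fogel and Engerman's average
DAYS_BETWEEN = 4.56         # Gutman's recalculation

# Population implied by the two figures taken together:
implied_population = 365 / (PER_PERSON_PER_YEAR * DAYS_BETWEEN)
print(f"implied population: {implied_population:.0f}")   # about 114

# On a plantation that size, the "low" per-capita average still means
# roughly 80 whippings a year, one every four to five days:
whippings_per_year = PER_PERSON_PER_YEAR * implied_population
print(f"whippings per year: {whippings_per_year:.0f}")   # about 80
```

The same record book thus supports both numbers; Gutman's point is that averaging across individuals conceals how constantly the whip was present on the plantation.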
After Time on the Cross, advocates of numerical analysis and computing technology found themselves on the defensive. But this was not always the case. At the end of World War II in 1945 Vannevar Bush, the Director of the Office of Scientific Research and Development and one of the United States' leading scientists, tried to envision how the advances in science during the war might be most usefully directed. In an Atlantic Monthly essay, titled "As We May Think", Bush turned to examples from historical inquiry to make his key point – technology might be turned to the problem of handling the growing mass of scientific and humanistic data. The problem Bush described seems only more pressing now: "The investigator is staggered by the findings and conclusions of thousands of other workers – conclusions which he cannot find the time to grasp, much less remember, as they appear. Yet specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial" (Bush 1945).
Bush's vision was for a machine he called "the memex", a strikingly prescient description of a networked desktop computer. The machine would enable a scholar to map what Bush called a "trail" through the massive and growing scholarly record of evidence, data, interpretation, and narrative. He wanted machines to make the same kinds of connections, indeed to emulate the human mind with its fantastic power of association and linkage. Bush's principal examples for the memex's applications were spun out of history – a researcher investigating the origins of the Turkish longbow – and he considered historians strong candidates to become "a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record." As historians went about their noble work, Bush thought, they would leave nothing hidden from view, instead producing scholarship that was intricately connected in ways that could be accessed, replicated, and extended: "The inheritance from the master becomes, not only his additions to the world's record, but for his disciples the entire scaffolding by which they were erected."
Some American historians were already working in the ways Bush described. They were using Hollerith (IBM) punchcards and undertaking large-scale analysis of a wide array of records. Frank Owsley and Harriet Owsley in their landmark 1949 study of the Old South used statistical methods to make linkages across disparate federal census records and to create new quantitative datasets to analyze the class structure of the region. Frank Owsley and a large number of his graduate students worked on thousands of manuscript census returns for several counties from Alabama, Mississippi, Georgia, and Tennessee, and linked record-by-record the individuals and households in the population, slave owners, and agricultural schedules. Owsley's work challenged the idea that the planter elite dominated the South and instead suggested that a plain folk democracy characterized the region. The questions Owsley asked demanded methods of computational linking and quantitative analysis. Other historians in the 1940s and 1950s also worked on large statistical projects, including Merle Curti's analysis of the American frontier and William O. Aydelotte's study of the British Parliament in the 1840s.
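The Owsleys' record-by-record linkage is, in modern terms, a join across datasets on identifying fields. The sketch below is a minimal illustration with invented records and an exact-match key; real census linkage must also cope with spelling variants, common names, and missing entries:

```python
# A minimal sketch of record linkage across census schedules, the kind
# of matching the Owsleys performed by hand. All records are invented.

population_schedule = [
    {"name": "John Smith", "county": "Greene", "occupation": "farmer"},
    {"name": "Mary Hill",  "county": "Greene", "occupation": "farmer"},
]
agricultural_schedule = [
    {"name": "John Smith", "county": "Greene", "improved_acres": 120},
]

def link(pop, agri):
    """Join two schedules on (name, county). A toy exact-match key;
    historical linkage needs fuzzier matching than this."""
    index = {(r["name"], r["county"]): r for r in agri}
    linked = []
    for person in pop:
        match = index.get((person["name"], person["county"]))
        if match:
            linked.append({**person, **match})
    return linked

for record in link(population_schedule, agricultural_schedule):
    print(record)
```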
In retrospect, we can see three distinct phases in the ways historians have used computing technologies. Owsley, Curti, Aydelotte, and a few others in the 1940s were part of the first phase of quantitative history. These historians used mathematical techniques and built large datasets. A second phase began in the early 1960s and was associated with an emerging field of social science history. The "new" social, economic, and political history concentrated on mobility, political affiliation, urbanization, patterns of assimilation, and legislative behavior. It included historians using a range of statistical techniques, such as Lee Benson and Allan Bogue of the so-called "Iowa school" of quantification, as well as Olivier Zunz, Michael F. Holt, Stephan Thernstrom, J. Morgan Kousser, and many others. Many "new" social, political, and economic historians drew on the massive data collected in the Inter-University Consortium for Political and Social Research (ICPSR), which was founded at the University of Michigan in 1962 with support from the American Historical Association and the American Political Science Association to collect and make available historical data on county elections, census data, congressional roll call votes, and other miscellaneous political files. The computer made possible, or at least more practical and compelling, the study of history "from the bottom up." Political historians examined the influences at play in voting, not just the rhetoric of a few leaders; social historians found patterns to describe the world of average people; and economic historians developed models to account for multiple variables of causation.
The technology was alluring, but historians worried about the costs of the research and the time and training it took. Robert P. Swierenga, a historian with considerable experience in quantitative methods, confessed in a 1974 article that "to compute the Yule's Q coefficient statistic for congressional roll call analysis" might require five hours of mainframe computer time to run the analysis, not to mention the time and cost of coding the 100,000 data cards needed for the computation. Swierenga was skeptical that "desk consoles" would solve the problem because historians' data were simply too large for the machines of the foreseeable future (Swierenga 1974: 1068).
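Yule's Q itself is elementary; the five hours of mainframe time went to processing the punched cards, not to the formula. A sketch with invented vote counts for a pair of roll calls:

```python
# Yule's Q for a 2x2 table of legislators' votes on two roll calls.
# Q = (ad - bc) / (ad + bc), ranging from -1 to +1.
# The vote counts below are invented for illustration.

def yules_q(a, b, c, d):
    """a: yea/yea, b: yea/nay, c: nay/yea, d: nay/nay."""
    return (a * d - b * c) / (a * d + b * c)

# 80 legislators voted yea on both bills, 20 split each way,
# and 80 voted nay on both:
q = yules_q(80, 20, 20, 80)
print(f"Q = {q:.2f}")   # Q = 0.88, a strong positive association
```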
The PC revolution and the rise of the Internet, along with the exponential increases in processing speed and storage capacity described by Moore's law, led to the third phase. In this current phase, the networking capacity of the Internet offers the greatest opportunities and the most challenges for historians. At the same time as the Internet has created a vast network of systems and data, personal computers and software have advanced so far that nearly every historian uses information technologies in their daily work.
Some significant methodological issues have emerged with the rise of PCs and "off the shelf" software. Manfred Thaller, an experienced quantitative historian, argued in 1987 that database programs suited for businesses simply did not work for historians, who continually face "fuzzy" or incomplete data. The most obvious limitation that historians encounter with commercial database programs is the date function, since most programs cannot interpret or query nineteenth-century or earlier dates, and a nest of problems in date functionality accompanies any attempt to use historical dating in these programs. The nature of historical sources and the complexity of historical relationships led Thaller and others to challenge the easy comfort historians have with these programs. Thaller has been a voice crying in the wilderness, as historians in the 1980s and 1990s snapped up commercial database and spreadsheet programs with little hesitation and bent them to their needs. Meanwhile, Thaller and other historians at Göttingen developed a software system, KLEIO, for the needs of historical researchers and scholars. KLEIO's data processing system differs from standard packages because of its data structure. The software allows for three forms of related information: documents, groups, and elements. KLEIO handles a wide range of "fuzzy" data, historical dating systems, and types of documents (see Denley and Hopkin 1987: 149; Harvey and Press 1996: 190–3).
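The date problem Thaller identified is easy to illustrate. Below is a minimal sketch of a record type that tolerates the partial and approximate dates common in historical sources, which the fixed date fields of 1980s commercial packages could not represent; the class and the sample records are invented:

```python
# A sketch of "fuzzy" historical dates: a record type that tolerates
# unknown months and days and "circa" dating. Invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HistoricalDate:
    year: int                      # e.g. 1847
    month: Optional[int] = None    # None = unknown in the source
    day: Optional[int] = None
    circa: bool = False            # "c. 1847" in the source

    def earliest(self):
        """Sort key treating unknown parts as the earliest possible."""
        return (self.year, self.month or 1, self.day or 1)

records = [
    HistoricalDate(1851, 3, 12),
    HistoricalDate(1847, circa=True),   # "c. 1847", month/day unknown
    HistoricalDate(1847, 9),            # September 1847, day unknown
]
# Records sort sensibly even when partially dated:
for d in sorted(records, key=HistoricalDate.earliest):
    print(d)
```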
Throughout the 1980s historians, especially in Europe, refined the use of databases for historical computing. In the process they questioned how these tools affected their interpretations and whether the relational database models simply could not account for non-tabular data such as long texts, sound, images, and maps. Manfred Thaller's system tried to separate what he called the "knowledge environment" from the system environment or software, so that ideally the meaning of the records in a database remained independent of its source information. Database design for historical computing led Thaller and other quantitative historians to distinguish designs built around the method of analysis or model from those built around the source. In the former, historians extract information out of sources from the data's original context into a set of well-defined tables arranged to allow for predetermined queries. In the latter, historians concentrate on capturing the original text or data and its entire source context, only later making decisions about analysis and organization of the data. Thaller's purist position on the use of databases in historical computing has gained attention in the field and sharpened the debate, but it has not slowed down the widespread use of commercial software for historical analysis. Most historians have taken a far more pragmatic approach.
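The two design philosophies can be sketched concretely. In the method-oriented approach the analytical table is the database; in the source-oriented approach the transcription is, and tables are derived from it later. The parish-register entry below is invented for illustration:

```python
# A sketch of the two database-design philosophies Thaller contrasted.
# The parish-register entry is invented for illustration.
import re

source_entry = {
    "transcription": "Bapt. 3 May 1787, Jn. son of Thos. Webb, labourer",
    "document": "St Mary parish register, p. 41",
}

# Method-oriented design: extract only the fields the planned analysis
# needs, shaped for predetermined queries; source context is discarded.
method_oriented_row = {"surname": "Webb", "year": 1787, "event": "baptism"}

# Source-oriented design: store the full transcription and provenance;
# analytical tables are derived later and can always be rebuilt or revised.
def derive_row(entry):
    year = int(re.search(r"\b1\d{3}\b", entry["transcription"]).group())
    return {"year": year, "event": "baptism", "source": entry["document"]}

print(derive_row(source_entry))
```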
Despite the fundamental changes in information technologies and in historians' use of them, the recovery from the "battle royale" over Time on the Cross has been slow for those American historians working with computers. Manfred Thaller and the debates over database methodology and theory remained well out of the mainstream of American history in the 1980s. Twenty years after Time on the Cross, when Daniel Greenstein published A Historian's Guide to Computing (1994), he felt compelled to convince his readers that computers should not be "tarred with the same brush as the social science historians" (1994: 1). His treatment of regression analysis amounted to less than a page, while e-mail, bibliographic software, and note-capturing programs took up more than a chapter. Similarly, Evan Mawdsley and Thomas Munck, in their Computing for Historians: An Introductory Guide (1993), cautioned that they were "not champions of 'cliometrics', 'quantification', the 'new' history, 'scientific history', or even what is called 'social science history'." The idea that some historians pushed the profession to use computers and quantification techniques struck these authors as "the tail wagging the dog" (Mawdsley and Munck 1993: xiii). J. Morgan Kousser, in his review of social science history in the 1980s, admitted that most historians were moving abruptly away from quantitative methods. Time and distance tempered some critics, and by 1998 Joyce Appleby in her presidential address to the American Historical Association remarked that the quantifying social historians had "immediate, substantive, conceptual, and ideological effects" on the profession. Appleby considered them responsible for an important shift in the practice of history from unexplained to "explicit" assumptions about research methodology and from descriptive to analytical narrative (Appleby 1998: 4).
British historians, however, have established a long-standing and comparatively polite respect for historical computing. The Association for History and Computing (AHC) was founded at Westfield College, University of London, in 1986, and has sponsored several large conferences and published proceedings. At the 1988 meeting, Charles Harvey speculated on the reasons for the strong interest in AHC among British historians. Chief among them, according to Harvey, was the "strength of support for empirical, scientific, developmental history in Britain" (Mawdsley et al. 1990: 206). Harvey described the British historical profession as focused on process not outcomes, rooted in scientific inquiry and only marginally concerned with theoretical debates. British historical scholarship was characterized by a focus on the importance of primary sources, the application of scientific notions in historical analysis, an emphasis on the specialized skills of the profession, the wide gulf between historians and social scientists, and the emphasis on applied skills and research methods in teaching students. Few of these characteristics describe the American historical profession, and this may explain the thriving interest in "history and computing" in England.
Despite the success of and interest in AHC, some British historians have viewed computing technology variously as the handmaiden of postmodernism, as a witless accomplice in the collapse of narrative, and as the silent killer of history's obligation to truth and objectivity. One British historian recently argued: "The declining importance of the so-called grand narratives of national and class histories, and the fragmentation and loss of cultural authority of scholarly history in the face of increasingly diffuse popular and political uses of 'history,' cannot be separated from the impact of the new technologies" (Morris 1995: 503). Another suggested that when a historian "embarks on a statistical analysis he crosses a kind of personal Rubicon" (Swierenga 1974: 1062). Across that divide, in this view, the historian finds restrictions imposed by the software that defy the historian's allegiance to basic craft and adherence to evidence.
If Time on the Cross soured many American historians on computational methods, the Philadelphia Social History Project helped sustain them for a time. The project began in the 1970s under the leadership of Theodore Hershberg and took over a decade to complete. At first, skeptical critics considered it a painful example of the tail wagging the dog, long on money, time, and detail and short on results. Hershberg and his colleagues at the University of Pennsylvania used the computer to make linkages between a vast array of historical data and set out to establish guidelines for large-scale historical relational databases. The project took in nearly $2 million in grants and, in the process, offered historians a model for large-scale research and interdisciplinary activity. Numerous dissertations and theses drew on its data collection, and great expectations for the project were widely claimed. Still, the project ended with what many considered a thud – the publication in 1981 of a book of essays by sixteen historians working with the Philadelphia Social History Project that offered no larger synthesis for urban history. Critics faulted the project and its authors for over-quantification, for paying too much attention to cultivating an interdisciplinary "research culture", and for allowing federal dollars to push a research agenda about "public policy" matters removed from history.
While historians continued to ponder the pros and cons of quantitative methods and while the profession increasingly turned to cultural studies, or took the "linguistic turn", as some have called the move toward the textual and French theory, computer scientists were hammering out a common language for shared files over the Internet. The World Wide Web's opening in 1993 with the creation of HTML and browser technologies offered historians a new medium in which to present their work. One of the first historians to understand the implications of the Web for scholarship, teaching, and historical study generally was Edward L. Ayers at the University of Virginia. He was already embarking on a computer-aided analysis of two communities in the American Civil War when he first saw the Mosaic browser and the Web in operation at the Institute for Advanced Technology in the Humanities at the University of Virginia. Ayers immediately discarded his initial plans to distribute his research project on tape to libraries and instead shifted the direction entirely to the World Wide Web. As a result, Ayers's Valley of the Shadow Project was one of the earliest sites on the World Wide Web and perhaps the first work of historical scholarship on it.
From its inception the Valley of the Shadow Project was more than met the eye. The general public understood it as a set of Civil War letters, records, and other accounts. Students and teachers praised it for opening up the past to them and allowing everyone "to be their own historian." Scholars stood initially aloof, wondering what could possibly be so exciting about an electronic archive. Gradually historians began to pay attention to the Web and to the Valley of the Shadow Project in particular. Some historians saw the Project as a potentially troublesome upstart that threatened to change the narrative guideposts laid down in other media. James M. McPherson, one of the leading historians of the Civil War period whose work had done so much to influence Ken Burns's The Civil War, considered the Project potentially too narrow and argued in a major conference panel on the Valley Project that the communities Ayers had chosen were not representative of the North and South. McPherson, and other critics as well, were beginning to recognize that the digital medium allowed Ayers to create a thoroughly captivating, technically savvy, and wholly unexpected comparative approach to the Civil War, one so complex and interconnected that such a thing seemed impossible in more linear media such as film and books.
While Ayers was getting started on the Valley of the Shadow Project, another group of historians was already producing an electronic textbook for the American history survey course. The American Social History Project, based at City University of New York, included Roy Rosenzweig, Steve Brier, and Joshua Brown. Their Who Built America?, a CD-ROM of film, text, audio, images, and maps, aggressively and successfully placed social history, especially labor history, at the center of the national story. The CD-ROM won a major prize from the American Historical Association in 1994 and one reviewer called it a "massive tour-de-force, setting the standard for historians who aim to make their work accessible to broad audiences via multimedia" (Darien 1998). Other reviewers, for H-Net, speculated that Who Built America? was part of a larger trend toward CD-ROMs and multimedia history, an "experiment" that would undoubtedly inspire others and lead to new forms of scholarship and teaching (Frost and Saillant 1994). These reviewers, though, also listed the daunting system requirements to run the CD-ROM: "Macintosh computer running system 6.0.7 or higher; 4 megabytes of installed RAM in System 6 or 5 megabytes in System 7, with a minimum of 3–5 megabytes to allocate to HyperCard; hard disk with 7 megabytes of free space, or 8 if QuickTime and HyperCard must be installed; 13-inch color monitor; QuickTime-ready CD-ROM drive."
It turns out that Edward Ayers's early decision to produce the Valley of the Shadow Project for the World Wide Web was one of the keys to that Project's long-term success. While the team at the University of Virginia working with Ayers has produced CD-ROMs and Ayers himself is publishing a narrative book out of the electronic archive, it is the website that has reached millions of users and to which all of the other scholarly objects point. The CD-ROMs of the Valley of the Shadow and Who Built America? contain remarkable materials, but their self-contained systems and off-the-network approach hobbled them, and by the late 1990s the CD-ROM seemed a relic in the fast-moving technology marketplace. The World Wide Web offered connectivity and hypertext on a scale that the public demanded and that scholars were beginning to see as immensely advantageous. "Historians might begin to take advantage of the new media", Ayers wrote, "by trying to imagine forms of narrative on paper that convey the complexity we see in the digital archives." In his call for a "hypertext history", Ayers admitted that while the technology offers grand possibilities, even with the crude tools presently in use, there are significant barriers for historians. Ayers called hypertext history potentially a "culmination of a long-held desire to present a more multidimensional history and a threat to standard practice" (Ayers 1999).
All of the connectivity and digitization has opened up history and historical sources in unprecedented ways, yet the technology has not come without tensions, costs, and unexpected sets of alliances and demands for historians, educators, administrators, and the public. The opportunities of digital technology for history notwithstanding, Roy Rosenzweig, one of the leading scholars of the Who Built America? CD-ROM, and Michael O'Malley questioned whether professional historians can "compete with commercial operations" (Rosenzweig and O'Malley 1997: 152). Paying for copyright permissions, the costs of design and graphical layout, and the maintenance of programming technologies and software all conspire to favor commercial publishing companies rather than professional historians. More recently, Rosenzweig has cautioned historians about the prospects of writing history in a world of information overload. "Historians, in fact, may be facing a fundamental paradigm shift from a culture of scarcity to a culture of abundance", Rosenzweig observed, while at the same time archivists and librarians warn that vast electronic records are being lost every day in government, business, and academic institutions, not to mention in homes, churches, schools, and nonprofits. Problems of "authenticity", Rosenzweig pointed out, plague digital preservation, and libraries and archives face skyrocketing costs and difficult choices. But historians face equally demanding problems. "The historical narratives that future historians write may not actually look much different from those that are crafted today", according to Rosenzweig, "but the methodologies they use may need to change radically" (Rosenzweig 2003). The vast size of born-digital electronic data collections and the interrelationships among these data present historians with a fundamental methodological issue, according to Rosenzweig.
They will need tools and methods, perhaps borrowed from the tradition of social science history, to make sense of these records.
If historians face an unprecedented scale of information, they also encounter in the digital medium an unexpectedly versatile mode of presenting their work. Many historians are beginning to ask what we can expect historical scholarship to look like in the networked electronic medium of the Internet and what forms of historical narrative it might enhance or enable. Robert Darnton, past president of the American Historical Association, National Book Award finalist, and innovator in historical forms of scholarship, sketched out his ideas for the future of electronic publishing in a 1999 New York Review of Books essay, titled "The New Age of the Book." He was concerned in large part with the future of the book, especially the academic monograph, and the university presses that produce and publish them. Darnton considered history to be "a discipline where the crisis in scholarly publishing is particularly acute" (Darnton 1999). Books won prizes and sold fewer than 200 copies; academic presses struggled to stay out of the red, and authors, especially young scholars before tenure, frantically tried to publish their work. Darnton used Middle East Studies as his example of "an endangered field of scholarship" about which the public cared and thought little. Only a few years after Darnton's review article, the importance of Middle East Studies could hardly be disputed, as the events of September 11, 2001, brought the field to the immediate attention of the American public. Darnton observed that the unpredictability of the market and the pressures on presses, tenure committees, and scholars seemed to conspire against the future of the academic book.
Darnton asked whether electronic publishing "could provide a solution." He outlined significant advantages for the field of history where connections among evidence were so eye-opening during research in the archive and often so difficult to reproduce in narrative. Darnton wished for a new form for historical scholarship: "If only I could show how themes crisscross outside my narrative and extend far beyond the boundaries of my book … instead of using an argument to close a case, they could open up new ways of making sense of the evidence, new possibilities of making available the raw material embedded in the story, a new consciousness of the complexities involved in construing the past" (Darnton 1999). Darnton cautioned against "bloating" the book and piling on appendages to narrative. Instead, he called for a trim pyramid of layers: summary at the top and documentation, evidence, commentary, and other materials below.
It is not yet clear how digital technology will affect the practice of history or whether historians will heed Darnton's call to consider the advantages of electronic publication. In a show of leadership Darnton offered historians an example of what new electronic scholarship might look like, publishing a dynamic essay in the American Historical Review (Darnton 2000). In recent years several historians have been working toward similar ends. Philip Ethington has written an electronic essay and presentation for the Web on the urban history of Los Angeles. Ethington "explores the hypothesis that the key concept in the search for historical certainty should be 'mapping' in a literal, not a metaphoric, sense" (Ethington 2000). His work draws on a wide range of media and sources to create, or rather recreate, the "panorama" of the city. Ethington suggests that the website can be read like a newspaper, inviting readers to wander through it, skipping from section to section and focusing on what strikes their interest. Motivated "simultaneously by two ongoing debates: one among historians about 'objective knowledge,' and another among urbanists about the depthless postmodern condition," Ethington's electronic scholarship grasps the "archetype of 'hyperspace'" to address these concerns.
Finally, Ayers and this author are working on an American Historical Review electronic article titled "The Differences Slavery Made: A Close Analysis of Two American Communities." This piece of electronic scholarship operates on several levels to connect form and analysis. First, it allows readers to reconstruct the process by which we arrived at our argument, to "follow the logic" of our thinking, in effect to retrace the kind of associative "trails" that Vannevar Bush expected the technology to make possible (Bush 1945). The scholarship also uses spatial analysis and spatial presentation to locate its subjects and its readers within the context of the historical evidence and interpretation. And it presents itself in a form that allows for unforeseen connections with future scholarship.
Historians will increasingly use and rely on "born digital" objects for evidence, analysis, and reference as libraries, archives, and government agencies digitize, catalogue, and make accessible historical materials (see Rosenzweig 2003). Some of these materials are hypertextual maps; others are annotated letters, edited video, oral histories, or relational databases. These digital objects vary widely in origin, format, and purpose. A born digital object is one created expressly for and in the digital medium, and is therefore more than a digital replication of an analogue object. For these objects, such as a reply to an email message, there is no complete analogue surrogate, and as a consequence historians will need to understand not only what these objects explain but also how they were created. The electronic text marked up in the Text Encoding Initiative (TEI) language, for example, is transformed in the process of both digitization and markup. Its markup scheme, as well as the software and hardware at both the server and client ends, affects how the text behaves and how its readers encounter it. Literary scholars such as Espen Aarseth have widely discussed the nature of narrative in cyberspace, and Aarseth calls cybertexts "ergodic" to distinguish them as non-linear, dynamic, explorative, configurative narratives (Aarseth 1997: 62). For historians the first stage in such textual developments has already found expression in a wide range of digital archives. While these archives might appear to many users as undifferentiated collections of evidence, they represent something far more interpreted. Digital archives are often themselves an interpretative model open for reading and inquiry, and the objects within them, whether marked-up texts or hypermedia maps, derive from a complex series of authored stages.
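The point about markup can be made concrete with a small, hypothetical TEI-style fragment (the text, tag names, and attributes below are illustrative inventions, not drawn from any particular archive): even minimal encoding layers editorial judgments, such as identifying a person or regularizing a date, onto the words themselves, and software must traverse that layer to recover a "plain" reading text. A sketch in Python, using only the standard library:

```python
import xml.etree.ElementTree as ET

# A hypothetical TEI-style fragment: the <persName> and <date> tags
# record editorial interpretation that a bare transcription would not carry.
tei = """<p>On <date when="1861-04-12">the twelfth of April</date>,
<persName ref="#jsmith">J. Smith</persName> wrote home.</p>"""

root = ET.fromstring(tei)

# The "plain" reading text is recovered only by flattening the markup away...
plain = " ".join("".join(root.itertext()).split())
print(plain)  # On the twelfth of April, J. Smith wrote home.

# ...while the interpretive layer remains a queryable object in its own right.
for el in root.iter():
    if el.tag in ("persName", "date"):
        print(el.tag, dict(el.attrib))
```

The same words thus exist twice: once as a surface text a reader encounters, and once as structured assertions an editor has made about that text, which is why the marked-up object cannot be treated as a neutral surrogate for its analogue source.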
What Jerome McGann called "radiant textuality", the dynamic, multi-layered expressions that digital technologies enable, applies to large edited digital texts as well as to discrete objects within larger electronic archives (see McGann 2001: 151–2, 206–7, for example).
The step from these archives to second-order historical interpretation necessarily involves the incorporation and explication of born digital objects, and "The Differences Slavery Made" offers an example of scholarship that emerges from and in relationship to born digital scholarly objects. The work fuses interpretative argument with the digital resources of the Valley of the Shadow Project, and it is designed for a future of networked scholarship in which interpretation, narrative, evidence, commentary, and other scholarly activities will interconnect. The resulting piece is intended to fuse form and argument in the digital medium. The authors propose a prismatic model as an alternative to Darnton's pyramid structure, one that allows readers to explore angles of interpretation on the same evidentiary and historiographical background. The prismatic functionality of the article opens the process of historical interpretation to the reader, providing sequential and interrelated nodes of analysis, evidence, and their relationship to previous scholarship. Just as important, the article tries to use its form as an explication of its argument and subject. Slavery, in other words, must be understood as having no single determinative value, no one experience or effect; instead, its refractive powers touched every aspect of society. The article's form – its modules of refracted analysis, evidence, and historiography – is meant to instruct and carry forward the argument.
Given the sweeping changes within the field of history and computing, we might ask what digital history scholarship will tackle in the future. A range of opportunities presents itself. The greatest anticipation and attention currently surround what is loosely called "historical GIS." Historical GIS (geographic information systems) refers to a methodology and an emergent interdisciplinary field in which computer-aided spatial analysis is applied to archaeology, history, law, demography, geography, environmental science, and other areas (see Knowles 2002). Historians are building large-scale systems for rendering historical data in geographic form for places across the world from Great Britain to Tibet, and they are finding new answers to old questions from Salem, Massachusetts, to Tokyo, Japan. Legal scholars have begun to examine "legal geographies" and to theorize about the implications of spatial understandings and approaches to legal questions (see Blomley 1994). These scholars seem only a step away from adopting historical GIS approaches in their studies of segregation, slavery, race relations, labor relations, and worker safety. Environmental historians and scientists, too, have developed new approaches to human and ecological change, examining subjects ranging from salt marsh economies and cultures in North America to the character and culture of Native American societies in the river basins of the Chesapeake. Taken together these efforts represent remarkable possibilities for an integrated and networked spatial and temporal collection.
But historians might dream up even more highly interpretive and imaginative digital creations. Extending historical GIS, they might attempt to recreate "lost landscapes" in ways that allow readers to move and navigate through them freely. These four-dimensional models might restore buildings, roads, and dwellings to historic landscapes, as well as the legal, economic, social, and religious geographies within them. Networks of information, finance, trade, and culture might also find expression in these models. Readers might do more than query these datasets; they might interact within them too, taking on roles and following paths they could not predict but cannot ignore. Readers of these interpretations will have some of the same questions that the critics of earlier computer-aided history had. The goal for historians working in the new digital medium must be to make the computer technology transparent and to allow the reader to focus his or her whole attention on the "world" that the historian has opened up for investigation, interpretation, inquiry, and analysis. Creating these worlds, developing the sequences of evidence and interpretation, and balancing the demands and opportunities of the technology will take imagination and perseverance.
References for Further Reading
Aarseth, Espen (1997). Cybertext: Perspectives on Ergodic Literature. Baltimore and London: Johns Hopkins University Press.
Appleby, Joyce (1998). The Power of History. American Historical Review 103, 1 (February): 1–14.
Ayers, Edward L. (1999). History in Hypertext. Accessed April 5, 2004. At http://www.vcdh.virginia.edu/Ayers.OAH.html.
Barzun, J. (1972). History: The Muse and Her Doctors. American Historical Review 77: 36–64.
Barzun, J. (1974). Clio and the Doctors: Psycho-History, Quanta-History and History. Chicago: University of Chicago Press.
Blomley, Nicholas K. (1994). Law, Space, and the Geographies of Power. New York: Guilford Press.
Bogue, A. G. (1983). Clio and the Bitch Goddess: Quantification in American Political History. Beverly Hills, CA: Sage Publications.
Bogue, A. G. (1987). Great Expectations and Secular Depreciation: The First 10 Years of the Social Science History Association. Social Science History 11.
Burton, Orville, (ed.) (2002). Computing in the Social Sciences and Humanities. Urbana and Chicago: University of Illinois Press.
Bush, Vannevar (1945). As We May Think. Atlantic Monthly (July).
Clubb, J. M. and H. Allen (1967). Computers and Historical Studies. Journal of American History 54: 599–607.
Darien, Andrew (1998). Review of Who Built America? From the Centennial Celebration of 1876 to the Great War of 1914. Journal of Multimedia History 1, 1 (Fall). At http://www.albany.edu/jmmh.
Darnton, Robert (1999). The New Age of the Book. New York Review of Books (March 18).
Darnton, Robert (2000). An Early Information Society: News and the Media in Eighteenth-century Paris. American Historical Review 105, 1 (February). Accessed April 5, 2004. At http://www.historycooperative.org/journals/ahr/105.1/ah000001.html.
David, Paul et al. (1976). Reckoning with Slavery: A Critical Study in the Quantitative History of American Negro Slavery. New York: Oxford University Press.
Degler, Carl (1980). Remaking American History. Journal of American History 67, 1: 7–25.
Denley, Peter and Deian Hopkin (1987). History and Computing. Manchester: Manchester University Press.
Denley, Peter, Stefan Fogelvik, and Charles Harvey (1989). History and Computing II. Manchester: Manchester University Press.
Erickson, C. (1975). Quantitative History. American Historical Review 80: 351–65.
Ethington, Philip J. (2000). Los Angeles and the Problem of Urban Historical Knowledge. American Historical Review (December). At http://www.usc.edu/dept/LAS/history/historylab/LAPUHK/.
Fitch, N. (1984). Statistical Fantasies and Historical Facts: History in Crisis and its Methodological Implications. Historical Methods 17: 239–54.
Fogel, R. W. and G. R. Elton (1983). Which Road to the Past? Two Views of History. New Haven, CT: Yale University Press.
Fogel, R. W. and Stanley Engerman (1974). Time on the Cross: The Economics of American Negro Slavery. London: Little, Brown.
Frost, Carol J. and John Saillant (1994). Review of Who Built America? From the Centennial Celebration of 1876 to the Great War of 1914. H-Net Reviews. Accessed April 5, 2004. At http://www.h-net.org/mmreviews/showrev.cgi?path=230.
Greenstein, Daniel I. (1994). A Historian's Guide to Computing. Oxford: Oxford University Press.
Gutman, Herbert (1975). Slavery and the Numbers Game: A Critique of Time on the Cross. Urbana: University of Illinois Press.
Harvey, Charles and Jon Press (1996). Databases in Historical Research: Theory, Methods, and Applications. London: Macmillan Press.
Himmelfarb, G. (1987). The New History and the Old. Cambridge, MA: Harvard University Press.
Jensen, Richard (1974). Quantitative American Studies: The State of the Art. American Quarterly 26, 3: 225–240.
Knowles, Anne Kelly, (ed.) (2002). Past Time, Past Place: GIS for History. Redlands, CA: ESRI.
Kousser, J. Morgan (1989). The State of Social Science History in the late 1980s. Historical Methods 22: 13–20.
McGann, Jerome (2001). Radiant Textuality: Literature after the World Wide Web. New York: Palgrave.
Mawdsley, Evan and Thomas Munck (1993). Computing for Historians: An Introductory Guide. Manchester: Manchester University Press.
Mawdsley, Evan, Nicholas Morgan, Lesley Richmond, and Richard Trainor (1990). History and Computing III: Historians, Computers, and Data, Applications in Research and Teaching. Manchester and New York: Manchester University Press.
Middleton, Roger and Peter Wardley (1990). Information Technology in Economic and Social History: The Computer as Philosopher's Stone or Pandora's Box? Economic History Review (n.s.) 43, 4: 667–96.
Morris, R. J. (1995). Computers and the Subversion of British History. Journal of British Studies 34: 503–28.
Rosenzweig, Roy (1994). "So What's Next for Clio?" CD-ROM and Historians. Journal of American History 81, 4 (March): 1621–40.
Rosenzweig, Roy (2003). Scarcity or Abundance? Preserving the Past in a Digital Era. American Historical Review 108, 3 (June): 735–62.
Rosenzweig, Roy and Michael O'Malley (1997). Brave New World or Blind Alley? American History on the World Wide Web. Journal of American History 84, 3 (June): 132–55.
Rosenzweig, Roy and Michael O'Malley (2001). The Road to Xanadu: Public and Private Pathways on the History Web. Journal of American History 88, 2 (September): 548–79.
Shorter, Edward (1971). The Historian and the Computer: A Practical Guide. Englewood Cliffs, NJ: Prentice Hall.
Swierenga, Robert P., (ed.) (1970). Quantification in American History. New York: Atheneum.
Swierenga, Robert P. (1974). Computers and American History: The Impact of the "New" Generation. Journal of American History 60, 4: 1045–220.
Woodward, C. Vann (1974). The Jolly Institution. New York Review of Books, May 2.