DHQ: Digital Humanities Quarterly
Volume 17 Number 2
2023

Unpacking tool criticism as practice, in practice


Thanks to easy-to-use data analysis tools and digital infrastructures, even those humanities scholars who lack programming skills can work with large-scale empirical datasets in order to disclose patterns and correlations within them. Although empirical research trends have existed throughout the history of the humanities [Bod 2013], these recently emergent possibilities have revived an empiricist attitude among humanities scholars schooled in more critical and interpretive traditions. Responding to calls for a critical digital humanities [Berry and Fagerjord 2017] [Dobson 2019], this paper explores “tool criticism” [Van Es et al. 2018] – a critical attitude required of digital humanities scholars when working with computational tools and digital infrastructures. First, it explores tool criticism as a response to instrumentalism in the digital humanities and proposes it to be part of what a critical digital humanities does. Second, it analyses tool criticism as practice, in practice. Concretely, it discusses two critical making–inspired workshops in which participants explored the affordances of digital tools and infrastructures and their underlying assumptions and values. The first workshop focused on “games-as-tools” [Werning 2020]. Participants in the workshop engaged with the constraints, material and mechanical, of a card game by making modifications to it. In the second workshop, drawing on the concept of “digital infrapuncture” [Verhoeven 2016], participants examined digital infrastructure in terms of capacity and care. After identifying “hurt” in a chat environment, they designed bots to intervene in that hurt and offer relief.


The digitalization and datafication of all aspects of our cultural practices and social interactions have created new opportunities for research. Thanks to easy-to-use data analysis tools and digital infrastructures, even those humanities scholars who lack programming skills can work with large-scale empirical datasets in order to disclose patterns and correlations within them. Although empirical research trends have existed throughout the history of the humanities [Bod 2013], these recently emergent possibilities have revived an empiricist attitude among some humanities scholars schooled in more critical and interpretive traditions. This development has strengthened the position of those who have long thought that the humanities needed more “objective” means of inquiry to better substantiate their interpretive claims. It has been argued, however, that some of the interpretive humanistic tradition's strengths have been relinquished as a result. As Drucker and Svensson note, scholars engaged in digital projects sometimes leave their critical sensibilities behind: “such projects can demonstrate more positivism than the positivism we often (and sometimes erroneously) associate with science and technology” [Drucker and Svensson 2016, par. 9]. The problem, in part, stems from the fact that the assumptions and concepts of the computational methods embedded in tools, often derived from the empirical sciences, are generally left unscrutinized [Dobson 2019, 6]. These tools become the stage of “an encounter between two sets of epistemic traditions – hermeneutic and empirical” [Masson 2017, 28] and raise questions about methodology.
Prompted by this “methodological moment” (Scheinfeldt in [Rieder and Röhle 2017, 210]) within the humanities, there has been a call for what has been termed “tool criticism” ([Koolen et al. 2019] [Van Es et al. 2018] [Van Es et al. 2021] [Van Geenen 2020]). These authors have drawn attention to the need for digital humanists to critically reflect on the impact of their research tools, which have computational methods embedded within them, on knowledge production. Rather than mere instruments, these tools are envisioned as sets of conditions whose affordances at once enable and constrain practices. The proposal that our research tools are caught up in the epistemic process is not new [Baird 2004] [Latour and Woolgar 1986]. However, as a result of the computational turn and the proliferation of easy-to-use tools for data analysis and visualisation, there is a need to put the matter back on the agenda of the humanities and encourage further discussion on digital methodologies. Tool criticism is part of a response to the call for a third-wave digital humanities that develops a programme of criticism with regard to the computational [Berry 2011], and it moves the discussion forward.
In this paper I seek to make tool criticism a bit more concrete. To do so I explore tool criticism as a response to instrumentalism in the digital humanities and propose it to be part of what a critical digital humanities does. Subsequently, I explore it as a practice, and in practice. More specifically, I discuss two critical making–inspired workshops in which participants analysed the affordances of digital tools and infrastructures and engaged with their underlying assumptions and values. The first workshop approached “games-as-tools” [Werning 2020]. Participants modified a card game, playing with its material and mechanical constraints in order to argue with its claims and assumptions. The second workshop, drawing on the concept of “digital infrapuncture” [Verhoeven 2016], asked participants to examine digital infrastructure in terms of capacity and care. They first identified pain and stress in a chat environment, caused by the norms and values embedded in the design of the infrastructure (e.g. the platform discriminates against or even excludes certain users and practices). The participants then designed bots that intervened and offered relief in the system. Finally, reflection on the sort of tool criticism performed in the workshops enables an exploration of the critical and reflective attitude required of digital humanities scholars when conducting research with computational tools and digital infrastructures.

Against Instrumentalism

Research in the digital humanities is supported by many different computational tools and digital infrastructures. Examples of such tools include Excel, Tableau, Python, Google NGram Viewer, ImagePlot and the Digital Methods Initiative Issue Crawler. Whereas tools are often used individually or by small teams and are oriented towards solving particular tasks, infrastructures work on a larger scale and combine multiple functions and applications. Infrastructures can be defined as “the relationships, interactions and connections between people, technologies, and institutions that help data flow and be useful” [Parsons 2015]. Examples of digital infrastructures relevant to digital humanities research include Getty, DBpedia, Europeana, DARIAH, and HuNi. A challenge in working with tools and infrastructures is that they have heterogeneous development, funding and use contexts. These are all factors that impact their stability and appropriateness for research aims (see [Van Es et al. 2021]).
The computational turn has created a flood of empirical data, which can now be wrangled with many easy-to-use tools. In the humanities we currently encounter a “renewed positivist dream” in which computational tools are applied uncritically [Dobson 2019, 3]. In an earlier contribution to this journal, Drucker and Svensson find that “[h]umanists continue to be seduced by tools to whose workings they give limited attention, so they execute their projects (e.g. in network analysis software) without knowing how the results were generated” [Drucker and Svensson 2016, par. 9]. This phenomenon has also been labelled “blunt instrumentalism” [Tenen 2016]. Here computational tools are treated as transparent and neutral. Their affordances and embedded methods, though exercising an impact on the epistemological process, are not critically analysed.
Patrik Svensson underscores the instrumental tendencies in the humanities, pointing to the propensity of “think[ing] about infrastructure as placeless, immaterial, and neutral” [Svensson 2015, 349]. He puts forth the view that our tools and infrastructures in fact embody “ways of perceiving, interrogating, and enacting the world” [Svensson 2015, 342–43]. To avoid being governed by a scientific and engineering paradigm that now dominates the design of digital infrastructures, Svensson suggests that humanities scholars ought to become more involved in building. This way, models of infrastructure can be developed that are based on the humanities' own needs and desires (see also [Drucker 2012]).
Inspired by Lev Manovich's assertion that a prototype is a theory, Alan Galey and Stan Ruecker [Galey and Ruecker 2010] propose that experimental digital prototypes can contain arguments. They state that “digital artifacts have meaning, not just utility”, and argue that these efforts should be considered peer-reviewable forms of research. Allington et al. [Allington et al. 2016], however, have been critical of the idea that the building of computational tools can be a substitute for scholarly writing. They point out how Computer Science departments have never awarded PhD degrees on the basis of programming competence alone. Relevant to our discussion here is the idea that tools can embody the perspectives of their makers. However, it is important to point out that the assumptions and values in tools are not necessarily always the product of intent.
Responding to the concerns about instrumentalism, Mathieu Jacomy, one of the developers of Gephi, lashes out at academics. In a blog post he makes the following plea:

Please stop summarizing your detailed argumentation down to “tools influence us because of presuppositions built into them.” Nobody stuffed your tool. That is not how it works. Most tool makers do not really know what they are doing – they just experiment. They do not try to influence you – they probably do not care about you.  [Jacomy 2020, np]

Jacomy goes on to argue that tools arrive as accidents and not as “the Trojan horses of methodological imperialists” [Jacomy 2020, np]. While not every affordance of a tool is intentional or connected to the implementation of methods (some can, for instance, concern functionality), he overlooks how all choices reflect a particular perspective, contain certain assumptions and have implications for how knowledge is ultimately produced.
Against the background of instrumentalism, Drucker and Svensson find there is insufficient attention to the material support of knowledge production and that this should be addressed by the critical sensibilities of humanities scholars [Drucker and Svensson 2016]. They examine “middleware” as a concept enabling attention to be paid to “the ways tools structure our arguments or express thinking in protocols programmed into these platforms” ([Drucker and Svensson 2016, par. 1], my emphasis). Looking at what I have emphasized here in italics, an important differentiation is made between the basic affordances of tools and the affordances implemented in software by design [Schäfer 2011, 56]. Tools, then, not only embed methods; Drucker and Svensson also point to the importance of analysing the material properties that enable and constrain use and that carry their own biases and assumptions. The popular network visualisation and analysis software package Gephi, for instance, lacks the ability to trace the history of modifications made to network visualisations (see [Van Geenen 2020]).

Tool Criticism as Critical Digital Humanities

In light of a more general criticism charging that the digital humanities tend to be anti-interpretive [Allington et al. 2016], there have been calls for a “critical” digital humanities ([Berry 2011] [Berry and Fagerjord 2017] [Dobson 2019]) which incorporates critical and interpretive traditions into the digital humanities. Conducting tool criticism, raising questions about how our computational tools are caught up in the epistemic process, needs to be part of what a critical digital humanities (CDH) does. Tool criticism is necessary particularly because these tools often bear assumptions and concepts derived from the empirical sciences ([Drucker 2012, 85–86] [Dobson 2019, 6]).
David M. Berry has argued for the strengthening of a programme of criticism around the computational [Berry 2011]. However, in the view of Rieder and Röhle, a concentration on “the digital” and the understanding of code with regard to the scrutiny of computational tools would be short-sighted [Rieder and Röhle 2017, 118]. Such a focus emphasizes programming as a required skillset but overlooks the concepts and knowledges that are mobilized in the use of these tools (cf. [Dobson 2019]). Likewise, Dennis Tenen explains that “[j]ust applying the tool or even ‘learning to code’ alone was therefore insufficient for making sense of the results. What could help me, then, and what is only now beginning to surface in DH literature is a critical conversation about methodology” [Tenen 2016, 85]. In raising questions about computational tools and their embedded computational methods, these scholars share with tool criticism an interest in discussions about digital methodologies in the humanities.
The use of tools and infrastructure in humanities research requires that these tools, and the researcher's relation to them, be made a site of critical analysis – which is to say, that they demand tool criticism. As we have defined it elsewhere, tool criticism concerns

the critical inquiry into digital tools and how they are used for various purposes within the research process. It reviews the qualities of the tool in light of, for instance, research activities, and it reflects on how the tool (e.g., its data source, working mechanisms, anticipated use, interface, and embedded assumptions) affects the user, the research process and output, and its reliance on the user's training.  [Van Es et al. 2021, 52].

Central to this notion is a critical and reflexive attitude towards the tools used to create knowledge. Inspired by reflection-in-action [Schön 1983], tool criticism involves continuous interaction rather than the exercise of detached judgement from a distance. It is important, then, to reflect on the choices made in using the tools. This also explains why critical making–inspired workshops are used in this paper to explore tool criticism as practice, in practice. Tool criticism helps bring the traditional critical and interpretive strengths of humanities scholarship back into focus within digital humanities scholarship (cf. [Dobson 2019]). It recognizes that tools are socio-technical constructions and are never simply a means of facilitating certain outcomes. They possess certain affordances and also reflect the worldviews of their makers.
As mentioned earlier, Drucker and Svensson have argued the importance of paying attention to how the material features of tools support knowledge production [Drucker and Svensson 2016]. This is equally relevant in relation to our digital or computational tools. In the 1980s and '90s the digital was seen as virtual and as existing outside material constraints. This popular discourse was misleading. Paul Dourish, in examining the material dimension of software and digital information, exhibits an interest in the materialities of information – “those properties of representations and formats that constrain, enable, limit, and shape the ways in which those representations can be created, transmitted, stored, manipulated, and put to use – properties like their heft, size, fragility, and transparency” [Dourish 2017, 6]. In addition, software is always “in-material” due to its being embedded in physical data carriers [Van den Boomen et al. 2009, 9]. Importantly, the material properties of tools and infrastructures exert influence on what, and how, they enable us to know.
One term used to capture the complexity of tools and their function with respect to a more encompassing methodology is “the stack.” It is defined as “the interlinked and dependent layers of tools and abstractions that make up any contemporary computational application or procedure” [Dobson 2019, x]. Reflecting on the Natural Language Toolkit, Tenen explains, “Each level of abstraction in the movement from statistical methods, to Python code, to graphical user interface introduces its own set of assumptions, compromises, and complications” [Tenen 2016, 85–86]. In addition to computational tools, digital infrastructures are used in research. These are intricate networks of relations that come into being as a system. Linking these issues back to the problem of instrumentalism, the conception of these tools as transparent is misguided because tools reflect methodological issues and because they have material properties with epistemological implications.
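Tenen's point about layered assumptions can be made concrete with a toy sketch (illustrative Python, not the Natural Language Toolkit itself): even the lowest layer of a text-analysis stack, the tokenizer, already embeds a decision about what counts as a word, and every layer above inherits it.

```python
import re

text = "Don't panic - it's fine."

# One notion of "word": split on whitespace, punctuation stays attached
whitespace_tokens = text.split()

# Another notion: a regex that keeps letters and apostrophes only
regex_tokens = re.findall(r"[A-Za-z']+", text)

print(whitespace_tokens)  # ["Don't", 'panic', '-', "it's", 'fine.']
print(regex_tokens)       # ["Don't", 'panic', "it's", 'fine']
```

Any word count or frequency table computed further up the stack silently depends on which notion of “word” was chosen at this lowest layer.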

Critical making–inspired workshops

The two workshops discussed in this paper seek to promote tool criticism thinking as a means of demonstrating the importance of “methodological awareness and self-critique” [Dobson 2019, 6] when humanities students and scholars engage in computational research. These workshops can be understood as experiments in what Matt Ratto has popularized as critical making, which oscillates between building and reflection. In critical making workshops, participants build prototypes as a way of conceptual exploration. These prototypes “achieve value through the act of shared construction, joint conversation, and reflection” [Ratto 2011, 253]. This emphasis on the workshop process and the exchange that emerges rather than on a final product differentiates this sort of activity from “critical technical practice” [Agre 1997] and “critical design” [Dunne 2005].
Most of the external funding for DH projects has been directed toward tool-building rather than interpretive work [Dobson 2019, 15]. This tension in the digital humanities between intellectual labour on the one hand and making and practice on the other has often been captured in the catchphrase “more hack, less yack”. The split was epitomized when Stephen Ramsay, in “Who's In and Who's Out”, provocatively stated that digital humanities scholars need to learn how to code and build things [Ramsay 2011]. The present paper's premise is that this opposition is misinformed and unproductive, as “the humanities is both/and” [Nowviskie 2016]. The critical making–informed workshops discussed here, in combining building and theorizing, equally reject such a binary. They underscore that the building of tools is, or should be, linked to theorisation. To explore these workshops here, however, is not to argue that scholars need to learn how to build things, but to acknowledge that engaging in tool criticism requires a basic understanding of the biases and assumptions of the tools/infrastructure being used.
The concerns over instrumentalism noted above have also sparked debates as to the particular skills and literacies needed in the digital humanities. Some thinkers have argued that digital humanities scholars need to learn how to code ([Rushkoff 2010] [Ramsay 2011] [Galloway 2016]), whereas others have warned that teaching students to program is time-consuming, detracts from the development of critical thinking [Fuchs 2017], and may foster a “false sense of mastery” over the technology [Chun 2013]. Importantly, Rieder and Röhle [Rieder and Röhle 2017] find that the teaching of programming proficiency does not mean that the concepts and techniques employed in digital tools are clarified. The use of Gephi, for instance, would require an understanding of graph theory or sociometry. As Rieder explains in his recent book,

But beyond attributions of sometimes very broad properties to “the digital” or, more recently, to “algorithms,” scholars in the humanities and social sciences still rarely venture more deeply into the intellectual and material domains of technicality that hardware and software developers invent and draw on to design the forms, functions and behaviors that constitute technical objects and infrastructures.  [Rieder 2020, 81–82].

Commenting on the idea of a critical code studies, Rieder questions whether this type of “meaningful and context-aware reading” of code would even be possible for humanities scholars [Rieder 2020, 98].
How, then, can humanities scholars who use computational tools and infrastructures critically engage with them? To answer this question the work of Ben Schmidt proves useful. He suggests that, in order to understand algorithms, it is important to grasp the transformations they bring about [Schmidt 2016]. To demonstrate the usefulness of his proposal, Schmidt discusses the debate between Annie Swafford and Matt Jockers over Jockers' Syuzhet package that explores plots through sentiment analysis. Schmidt explains, “[t]he default smoothing in the Syuzhet package assumes [...] the start of every book has an emotional valence that continues the trajectory of its final sentence”. This function might be useful for the study of sitcom episodes, which tend to be cyclical in nature, but he finds it is less suitable for novels. Essentially, Schmidt proposes an engagement with the basic logics and principles of algorithms and an assessment of their suitability for the research at hand.
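The wrap-around assumption Schmidt describes can be illustrated with a small sketch. Syuzhet is an R package and its default smoothing is Fourier-based; the Python toy below reproduces only the circularity assumption (the sequence is treated as if its end flowed back into its start), not Syuzhet's actual implementation.

```python
def circular_smooth(values, window=3):
    """Moving average that wraps around the ends of the sequence,
    mimicking the periodicity assumed by Fourier-based smoothing."""
    n = len(values)
    half = window // 2
    return [
        sum(values[(i + k) % n] for k in range(-half, half + 1)) / window
        for i in range(n)
    ]

# A hypothetical plot arc that ends on a strongly negative note...
sentiment = [0.8, 0.6, 0.4, 0.2, 0.0, -0.4, -0.9]
smoothed = circular_smooth(sentiment)
print(round(smoothed[0], 2))  # 0.17: the negative ending bleeds into the smoothed opening
```

For a cyclical sitcom episode this borrowing from the ending may be defensible; for a novel, the smoothed opening now misrepresents the text, which is precisely the kind of transformation Schmidt asks researchers to assess.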
Schmidt's suggestion is certainly more attainable for digital humanities scholars than the deep engagement with the intellectual and material domains of technicality proposed by Rieder. His approach facilitates critical evaluation of tools and infrastructures, yet it should not be interpreted as an excuse to remain entirely uneducated about the concepts and methods at stake (for a basic understanding is certainly needed!). Schmidt's argument, however, needs to be extended, as a tool is more than an algorithm. In other words, digital humanists should reflect on how a computational tool, the entirety of its stack, performs transformations and impacts knowledge production, which includes inquiry into the methods [Dobson 2019, 156 footnote 9]. It requires the posing of questions related to all the levels of abstraction. This kind of critical reflexivity is also what I hoped to explore with the participants of the workshops. Prior to the workshops we introduced them to the notion of tool criticism, discussing its definition and purpose. During the workshops, when the participants tried to put it into practice, it became apparent what it entails and demands of users and where its challenges lie.

Playable datasets

Stefan Werning, a colleague at Utrecht University, has been organizing playable data workshops for several years now (see [Werning 2020]). These sessions have focused on exploring small/mid-sized datasets through card games. Participants play with the mechanisms and parameters of a card game as a means to facilitate new ways of calculating, sorting and ranking the underlying data sets (i.e. expressed in numbers, colours, identities etc.). These mechanics operate similarly to a layout algorithm like ForceAtlas2 in Gephi in that they need to fit the structure of the data at hand, but also distinctly (re)frame the types of insights that may be derived from the dataset [Werning 2020].
The playable data workshops centred on game co-creation, which can be understood as a type of critical making. As Odendaal and Zavala explain, “a physical game can help players make sense of something abstract and hidden and that is consequently excluded from public discussion” [Odendaal and Zavala 2018, np]. Although existing perspectives on critical (board/card) game-making are focused on games regarded as products, Glas et al. developed a technique called “discursive game design” [Glas et al. 2020]. Here game co-creation becomes an ongoing critical conversation. Specifically, following Galey and Ruecker, the approach proposes a trajectory of iterations involving a prototype. Each iteration developed then represents a statement about the argument and a consideration of alternative paths.
The workshops drew on Nathan Altice's view of the playing card as a “platform.” According to Altice, “cards are platforms too. Their ‘hardware’ supports particular styles, systems, and subjects of play while stymying others” [Altice 2014]. Altice goes on to explain how a game's design is influenced by the cards' five main characteristics or affordances: planar, uniform, ordinal, spatial and textural. Having two opposing sides allows for concealment and the uneven distribution of information among players; the card operates as a surface for images, text and art. The uniformity of the cards allows them to be stacked and reordered as a deck. This affordance introduces elements of chance and assures fairness. Moreover, cards are ordinal, allowing them to be counted, ranked, and sorted. They also occupy space, meaning that the cards' arrangement in a particular order can have significance. Also, cards are textural, designed to be handled (e.g., shuffling, dealing, cutting, etc.) and require proximity to one another. In short, Altice clarifies the ways that cards impose material and mechanical constraints. As Werning puts it, the playable datasets workshops treated games-as-tools. As such, observations about games can be extended to thinking about how computational tools are impacted by design choices.
In December 2019, within the context of our teaching the research master course “Data-driven tools and methods” for 15 students within Media and Culture Studies at the Faculty of Humanities, I collaborated with Werning on a Playable Datasets workshop. In the workshop we incorporated tool criticism as a reflective perspective taken up by the participants. Specifically, students were instructed to explore the mechanical and material constraints of a game we provided them and to make a series of modifications to it. For the workshop Werning had developed the “App Publisher” game using a dataset containing metadata from almost 10,000 apps from the Google Play Store.[1] Using nanDECK, he had converted a sample dataset from the Google Play Store into customizable playing cards.[2] The game was about the political economy of app publishing. Following the discursive game design approach, the game served as a starting point for subsequent modifications initiated by the students. In other words, they were asked to argue with the values and assumptions of the prototype in front of them.


Three main observations can be made about the workshop that are relevant to the aims of this paper. First, participants realized that making a game prototype entails a whole chain of tools, including Excel, nanDECK and the Google Play API. They saw how each step in the process of making a prototype had involved the making of choices and the following of certain procedures. They had worked with Excel to “clean” the dataset[3] and with nanDECK to design the cards. They struggled with the limitations imposed by only having access to the data made available through the Google Play Store API service, which restricted the scope of potential games they could make. Students raised questions about the meaning of the data, which they determined to be multivalent (e.g., the data mean something different to Google than to the maker or players of the original prototype) and thus dependent on the position from which one asks such questions. The workshop also made explicit how data and tools were intertwined. The tools they worked with determined the data that was available to scrape (Google Play Store API) and how it could be manipulated (Excel and nanDECK). The game mechanisms, in turn, provided ways of exploring the dataset and influenced how the data were understood. In arguing with the game prototype the students experienced first-hand how “Specific data sets and algorithms, designed to work together, cannot be easily separated and appropriated for other ends” [Loukissas 2019, 104].
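What “cleaning” involves, and the interpretive choices it quietly makes, can be sketched with one hypothetical field (the column name, formats, and helper function are illustrative, not the workshop's actual dataset or code): Play Store install counts are published as bucketed strings such as "10,000+", so even a one-line normalization discards information.

```python
def clean_installs(raw):
    """Normalize a Play Store-style install string such as '10,000+'
    to an integer; returns None when the value cannot be parsed.
    Interpretive choice: '10,000+' marks a lower bound, which this
    normalization silently treats as an exact count."""
    stripped = raw.strip().rstrip('+').replace(',', '')
    return int(stripped) if stripped.isdigit() else None

print(clean_installs('10,000+'))  # 10000
print(clean_installs('Free'))     # None: a misaligned row, silently dropped
```

Decisions of exactly this kind, made invisibly somewhere in the Excel-to-nanDECK chain, are what the students confronted when they asked what the data meant and to whom.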
Second, the workshop participants struggled with having to make and to reflect critically as part of a single process. They started out rather upbeat, pinpointing which assumptions about the app marketplace were embedded in the original game prototype. Tasked with making their own prototype based on the existing dataset, however, they struggled. They needed to learn how to work with various tools, but this effort took too much time and hindered their ability to think about their process. Only afterwards, when preparing their presentation, did they start to grasp the implications of their decisions and how their options had been framed by their tools. The session's time constraints minimized critical work in favour of results. Asking and attending to questions about approach, methods, and goals would certainly have slowed the process down – but, we should stress, moving slowly doesn't have to be experienced as something negative. Berry and Fagerjord have suggested that critical work in digital humanities projects can function as a “productive slowdown” [Berry and Fagerjord 2017, 143]. The experienced challenge of combining critical reflection and building was, in the end, primarily a limitation of how the workshop had been designed. New iterations could consider more time and space for engagement.
One team presented a short meme-based video made by student Daniël Everts on trying to engage with and reflect on nanDECK with the help of the software's 169-page manual.[4] The video's humour centres on the protagonist scrolling endlessly through the manual (two days later, three weeks later) on their computer. Finally, they push a large Spok (sic!) button, whereupon the fictional Star Trek Vulcan enters the room and, contrary to his famous hyperrationalism, repeatedly smashes the computer with his fist. Frustrations over the steep learning curves of computational tools are ever more frequently encountered when we are teaching practical data skills in our programme. The limitations experienced here were more about the “remediation” of analogue cards. Students implicitly reflected on the affordances of cards, earlier discussed in relation to Altice, in giving form to the games they developed. Werning and I also discussed whether we should have given them paper, scissors, and pens to design their game. As it was, they worked within the parameters of what was possible in code, which they needed to learn on the fly.
The frustration with the nanDECK manual points to the gap between transparency and explainability. In other words, making the workings of a tool transparent does not mean that the user understands how and why it works the way it does. Transparency is, for instance, often pursued by sharing the code of software. Yet many people may lack the knowledge to comprehend and interpret it. Explainability would entail that how the software tool works is made interpretable to the user, explained in such a manner that they grasp its underlying concepts and models. The workshop furthermore raised the question as to how much the students needed to understand of the tool and what needed to be addressed in their discussion of the prototype they had made. Rather organically, they let go of the idea that they had to understand all the minute details of the tools and began focusing more on “transformations” to address the values and assumptions of the game prototypes.
Lastly, the workshop's participants spent a lot of time fine-tuning their prototype with an eye to its usability, concerning themselves with matters such as the legibility of fonts and images on the cards. Here, they aligned with the approaches of the HCI community, which stress “efficient completion of tasks” and are oriented towards transparency and clarity [Drucker 2013, par. 34]. This orientation produced the impression that the game provides a window onto the underlying data and presents certainty about the apps and their relations. Interestingly, the participants could have used ambiguity to reflect the complexity of the app marketplace. Their choices, by contrast, demonstrated the appeal and persuasiveness of using simple categories, measures, and representational forms. In short then, the students seemingly favoured “clean” and unambiguous data and interfaces. This observation points to the importance of reflecting on one's own ontological and epistemological assumptions.

Bots and digital infrapuncture

To further explore tool criticism thinking, I also discuss insights prompted by another workshop. Cristina Cochior and Manetta Berends, both researchers/designers active in the Netherlands, designed and led the “bots and digital infrapuncture” online workshop hosted in June 2020 by the Data School at Utrecht University.[5] It was attended by eight participants, primarily PhD candidates, most of whom had some basic programming skills. The workshop was preceded by a lecture in which Deb Verhoeven discussed digital infrapuncture, a concept she initially introduced in an opening keynote for the 2016 Digital Humanities at Oxford Summer School (see [Verhoeven 2016]). Like Svensson, Verhoeven is concerned with the fact that most research infrastructures in DH do not reflect humanistic goals. She proposed “digital infrapuncture” as a new model for digital humanities infrastructure. The concept draws on Manuel de Solà-Morales's work on urban acupuncture and Steven Jackson's essay “Rethinking Repair” [Jackson 2014]. As Verhoeven explained, it combines the words infrastructure and acupuncture and offers a way of exploring how small-scale infrastructural interventions can transform larger contexts. Rather than building new infrastructures, digital infrapuncture looks to relieve systemic pressure, which she termed “hurt,” through such interventions. Verhoeven's work responds to scholarly conversations on the need to reconsider big data humanities infrastructure in terms of capacity and care (cf. [Nowviskie 2015]). In the workshop presentation she proposed agency, impact, and power as key conceptual axes along which current research infrastructures can be rethought and rebuilt. These axes were considered during the group discussion when identifying and tackling “hurt.”
Verhoeven's introduction to digital infrapuncture was followed by a short lecture by Cochior, in which she explored “Bot Logic” as a framework through which to understand bots' impact (their affective and effective forces alike). The workshop's aim was to devise bots for chat protocols that would intervene in the logics of the platform, creating small interventions that overhaul the notion of structure itself and offer relief. The workshop consisted of five parts: (a) brainstorming chat protocols, (b) identifying where it hurts, (c) designing a bot to puncture and deflate stress, (d) scripting how the bot acts upon the infrastructure, and finally (e) group discussion. I briefly reflect on the process and on the outcomes of two teams as a means of fleshing out what tool criticism actually entails in practice.
The first team, a duo of which I was part, focused on critique of the neoliberal university on Twitter in the Netherlands. We considered the Matthew effect of accumulated advantage as well as related questions of visibility and voice in academia. We thought about ways that bots could be used to amplify academics with few Twitter followers, but settled on another hurt, namely that caused by hierarchies of visibility sustaining political inaction in the Netherlands on issues pertaining to the neoliberalization of universities. To amend this hurt, we proposed a bot that would tweet posts carrying hashtags such as #WOinActie (a community of employees and students in the Netherlands protesting the neoliberalization of the university) directly to the Minister of Education, Culture and Science. Another team picked the open-source decentralized social network Mastodon and contemplated ways of countering radical voices on Mastodon timelines. They came up with a bot that supported the coverage of a range of perspectives on a topic: it would identify a post's topic, compare it to the entries in a categorized database, and then counter the post with an article offering a different perspective on that topic.
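The Mastodon team's proposal was a script rather than working software, but its logic can be made concrete. The sketch below is purely illustrative: the database, the keyword lists, and all function names (`detect_topic`, `counter_article`) are hypothetical stand-ins, and the naive keyword matching substitutes for whatever topic classification an actual bot would need.

```python
# Hypothetical sketch of the proposed bot's logic: match a post to a topic
# in a (fictional) categorized database of articles, then suggest an article
# offering a different perspective on that topic.

ARTICLE_DB = {
    "climate": [
        {"perspective": "economic", "url": "https://example.org/climate-econ"},
        {"perspective": "activist", "url": "https://example.org/climate-activism"},
    ],
}

# Naive keyword lists stand in for real topic classification.
TOPIC_KEYWORDS = {"climate": {"climate", "warming", "emissions"}}


def detect_topic(post):
    """Return the first topic whose keywords overlap with the post's words."""
    words = set(post.lower().split())
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            return topic
    return None


def counter_article(post, seen_perspective):
    """Return the URL of an article with a different perspective, if any."""
    topic = detect_topic(post)
    if topic is None:
        return None
    for entry in ARTICLE_DB[topic]:
        if entry["perspective"] != seen_perspective:
            return entry["url"]
    return None


print(counter_article("Rising emissions demand action", "economic"))
# → https://example.org/climate-activism
```

Even this toy version surfaces the team's own worry, discussed below: the choice of categories and of what counts as a “different perspective” is itself an editorial intervention, encoded in the database rather than made visible to users.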


The workshop prompted a series of questions relevant to tool criticism. These centred on the distribution of agency among users, bots, and platforms/infrastructures, and on scale and interconnections in relation to questions of impact. In terms of agency, we knew that a bot tweeting at the Minister of Education, Culture and Science would immediately be muted or blocked, which got us thinking about whether we could actually intervene on the platform. Was this possible, or are bots merely a means of augmentation? And if so, should augmentation be seen as a form of intervention? It seemed that what the bot would be doing was more akin to what Michel de Certeau calls tactics. As he explains, “[t]he space of a tactic is the space of the other. Thus it must play on and with a terrain imposed on it and organized by the law of a foreign power” [De Certeau 1984, 37]. In other words, the bot would simply be acting within the environment as defined by the platform's own given strategies. In fact, Andreas Hepp proposes that bots are often “media in media” [Hepp 2020, 10]. Based on platforms such as Facebook and Twitter, they simply “act upon” these platforms. In other words, they intervene on rather than in the infrastructure. The latter possibility would require access (signalling uneven power relations), which we would never have. Questions were thus raised about the platform's ownership and about how certain affordances reflected its purpose and the intended level of involvement of its audience.
The Mastodon team, with their bot, wanted to stimulate impartial coverage of news topics. They were critical of their own proposal to do so but formulated valuable reflections on scale and interventions. Plantin et al. argue that digital technologies have contributed to a “platformization” of infrastructures and an “infrastructuralization” of platforms. As they explain, platforms and infrastructures are both structures that underlie and support a system or organization, “but they differ in scale and scope” [Plantin et al. 2018, 299]. They characterize the former with terms such as “centrally designed and controlled”, “modular frameworks”, and “small scales and scopes”; the latter are described by terms and phrases like “widely accessible”, “heterogeneous systems and networks”, and “essential services” [Plantin et al. 2018]. Van Dijck et al. explain that in North America and Europe, Alphabet-Google, Facebook, Microsoft, Amazon, and Apple provide infrastructural services on which many other apps and platforms are built. The services of these tech companies are integrated into many websites, enabling the collection of user data across the Web and various apps [Van Dijck et al. 2018, 15]. Such complex interdependencies are also present in the digital infrastructures used for research. These relations, and the implications of their dependencies, need to be considered in the analysis of how research infrastructure shapes the research process. They prompt important questions about agency, impact, and power.
In addition to scale, the Mastodon team reflected with the workshop participants on the impact of the bot's interventions. We realized that the proposed bot might actually thwart the ideals of impartial coverage by giving attention to positions not supported by credible evidence. Regardless of whether or not the bot could work, it was interesting to think through the larger impact it might exert. Specifically, we discussed what had happened when representatives of social media platforms fact-checked against Wikipedia. In these cases, the hurt (i.e., the dissemination of fake news) had simply been displaced and travelled elsewhere: users started falsifying Wikipedia entries to prevent posts from being classified as fake. This phenomenon reminded us that systems are often interrelated and that interventions in one place might extend further than anticipated. Similar to our thinking about a Twitter neoliberal academia bot, the question arose as to where “the” critical location of intervention is situated and whether we had the requisite access to intervene in such a space.


In this paper, two critical making–inspired workshops were used as a springboard to unpack tool criticism and explore the types of reflections it brings to the fore. Our tools and infrastructures, it was underscored, require critical attention as they are not neutral actants in knowledge production. The workshops helped make explicit the thinking that tool criticism entails and the challenges that digital humanities scholars should confront when using tools and infrastructures. The critical modality in which such scholars have traditionally been trained should be directed towards their computational tools and infrastructures in the digital humanities.
The workshops underscored the complex network of relations between human and non-human actors that comes into being and enacts agency. Understanding tools is also a question of grasping their relation to users, and it requires the ability to treat them not as objects but as “a set of conditions, structured relations, that allow certain behaviours, actions, readings, events to occur” [Drucker 2013, par. 31]. This conception was at the centre of the workshops. Tool criticism is a continuous process of thinking and acting with tools. From earlier workshops we had learned that teaching participants the basics of programming would require much more time. In the bots and digital infrapuncture workshop, this enactment had therefore been substituted with a scripting exercise. While this was obviously not the same activity, participants were nonetheless asked to “run their code” and consider how the bot would work in practice. During the playable datasets workshop in particular, we noticed that participants struggled to move between making the game prototype and engaging in critical reflection on their process. Although critical work takes time, there should nonetheless be reflection on one's approach, methods, and the goals of the tools being used. This insight supports Berry and Fagerjord's call for a productive slowdown in research projects.
Moreover, the workshops brought to light that we often use not just a single tool but a chain of tools. This observation underscores Dobson's argument that criticism needs to be implemented throughout the entire research process (not just in relation to its results). Alternatively, researchers may use digital infrastructure that relies on, or is interwoven with, other tools or infrastructures; these interact with and co-define the inputs and outputs. It also became apparent that a focus on the transformations brought about by tools – rather than getting lost in the details of how they function – was sufficient for conversations to ensue about their assumptions and values. Here, however, the participants of both workshops were at a certain advantage, as they already had an affinity with the humanities.
As seen in the workshops, tool criticism thinking raises questions about impact and access, and about relations of power and ownership. The tools we use are often appropriated from other disciplines or taken from other institutional and commercial contexts. More specifically, tool criticism prompts a series of questions. Who made it? For whom (and for whom not)? With what purpose? Such questions are not unfamiliar, as they are grounded, for instance, in traditional source criticism [Koolen et al. 2019]. However, the enthusiasm for “big data” research in the humanities necessitates that they be revisited and updated. The workshops surfaced questions that engage explicitly with the materiality of the tools that play a role in transforming the data we work with. These include: What assumptions and values are established through a tool's affordances and implemented in its design? Can we examine and adapt the underlying code? What is the impact of different settings and parameters (materials and mechanics)? What happens when we use this set of rules instead of another? How would certain decisions change how we interpret the underlying data? Here the relation between tool and researcher also proved important: What are my own assumptions and values?
What complicates tool criticism's aim of answering these questions is that computational tools are often easy to use and thus seem transparent and neutral. Furthermore, tools are complex and dynamic in that they not only consist of interlinked and dependent layers but are also networked with other human and non-human actors (including underlying datasets). Criticism is contingent not only on access to the “black boxes” (e.g., code and algorithms, models and strategies) but also on the subsequent ability to make sense of them. The nanDECK manual was a powerful reminder that transparency is not enough for users to understand the stakes of using a tool, that is, its biases and assumptions. Tool criticism thus demands that users become more informed and literate about the building blocks of these technologies.
While our initial definition of tool criticism ([Van Es et al. 2018] [Van Es et al. 2021]) addressed the fact that our computational tools contain values and assumptions pertaining to embedded concepts and methods, which should be subject to scrutiny, we failed to extend our inquiry to concerns related to capacity and care. Here, books such as Data Feminism by Catherine D'Ignazio and Lauren F. Klein, with its seven principles of data feminism [D’Ignazio and Klein 2020], and All Data Are Local by Yanni Alexander Loukissas [Loukissas 2019] might prove useful. Concerns for data justice have likewise been expressed in relation to automated tools by Virginia Eubanks in Automating Inequality [Eubanks 2019] and Safiya Noble in Algorithms of Oppression [Noble 2018]. By incorporating such concerns into the questions we raise about our research and the tools and infrastructures we employ, tool criticism becomes an important component of digital humanities scholarship.
Engaging in tool criticism is, as the participants of these critical making-inspired workshops experienced, no simple task. It needs to be part and parcel of the research process, and it necessitates a slowing down and an eagerness to learn about the basic principles of the tools at hand. For humanities scholars in particular, this sort of criticism requires discussions not just about methodology but also about ethics and care. New research tools and infrastructures for the humanities should consider questions of access and agency in local contexts. Workshops of the sort discussed here are a modest step towards raising awareness of the fact that computational tools are not neutral and that many interpretive acts are involved in working with them. Moreover, the insights and reflections generated in these workshops can help formulate the types of questions that we need to ask about computational tools in digital humanities research and beyond, and surface issues that we need to address collectively.


The author would like to thank Stefan Werning, Cristina Cochior, Manetta Berends and Deb Verhoeven for their co-organization of and participation in the workshops, and for the fruitful exchange on tool criticism this allowed for.


[1] It was based on a dataset scraped and shared via Kaggle by Lavanya Gupta.

[3] In the course they had read the article by [Rawson and Muñoz 2019] about how the notion of ‘cleaning’ presupposes an underlying correct order of data. As such, they were primed to think also about the impact of this on knowledge production.

[4] The video was later posted to YouTube on January 7, 2020: https://www.youtube.com/watch?v=dGKdNYSjTHk

[5] This workshop has since been developed as an online module to stimulate tool criticism thinking with short video contributions from Deb Verhoeven, Seda Gürses and Andreas Hepp: https://bots-as-digital-infrapunctures.dataschool.nl/

Works Cited

Agre 1997 Agre, P. E. (1997) “Toward a Critical Technical Practice.” In G. Bowker, L. Gasser, L. Star, and B. Turner (eds), Bridging the Great Divide: Social Science, Technical Systems, and Cooperative Work, Erlbaum, Mahwah, pp. 131-58.
Allington et al. 2016 Allington, D., Brouillette, S., and Golumbia, D. (2016) “Neoliberal Tools (and Archives): A Political History of Digital Humanities.” Los Angeles Review of Books. Available at: https://lareviewofbooks.org/article/neoliberal-tools-archives-political-history-digital-humanities/
Altice 2014 Altice, N. (2014) “The Playing Card Platform”, Analog Game Studies. Available at: http://analoggamestudies.org/2014/11/the-playing-card-platform/
Baird 2004 Baird, D. (2004) Thing Knowledge: A Philosophy of Scientific Instruments. University of California Press, Berkeley.
Berry 2011 Berry, D. (2011) “The Computational Turn: Thinking About the Digital Humanities”, Culture Machine, 12: 1-22.
Berry and Fagerjord 2017 Berry, D. and Fagerjord, A. (2017) Digital Humanities. Polity Press, Cambridge.
Bod 2013 Bod, R. (2013) A New History of the Humanities: The Search for Principles and Patterns from Antiquity to the Present. Oxford University Press, Oxford.
Caplan 2016 Caplan, L. (2016) “Method without Methodology: Data and the Digital Humanities.” E-Flux, 72. Available at: https://www.e-flux.com/journal/72/60492/method-without-methodology-data-and-the-digital-humanities/
Chun 2013 Chun, W. (2013) “Wendy Hui Kyong Chun in Conversation with Adeline Koh”, E-Media Studies, 3.1. Available at: https://journals.dartmouth.edu/cgibin/WebObjects/Journals.woa/xmlpage/4/article/428
De Certeau 1984 De Certeau, M. (1984) The Practice of Everyday Life. Translated by Steven Rendall. University of California Press, Berkeley and Los Angeles.
Dobson 2019 Dobson, J. E. (2019) Critical Digital Humanities: The Search for a Methodology. University of Illinois Press, Urbana.
Dourish 2017 Dourish, P. (2017) The Stuff of Bits: An Essay on the Materialities of Information. The MIT Press, Cambridge, MA.
Drucker 2012 Drucker, J. (2012) “Humanistic Theory and Digital Scholarship.” In M. K. Gold (ed.), Debates in the Digital Humanities, University of Minnesota Press, Minneapolis, pp. 85–95.
Drucker 2013 Drucker, J. (2013) “Performative Materiality and Theoretical Approaches to Interface”, Digital Humanities Quarterly, 7.1. Available at: http://www.digitalhumanities.org/dhq/vol/7/1/000143/000143.html
Drucker and Svensson 2016 Drucker J. and P. Svensson. (2016) “The Why and How of Middleware”, Digital Humanities Quarterly 10.2. Available at: http://www.digitalhumanities.org/dhq/vol/10/2/000248/000248.html
Dunne 2005 Dunne, A. (2005) Hertzian Tales: Electronic Products, Aesthetic Experience, and Critical Design. The MIT Press, Cambridge, MA.
D’Ignazio and Klein 2020 D’Ignazio, C. and L. F. Klein. (2020) Data Feminism. The MIT Press, Cambridge, MA.
Eubanks 2019 Eubanks, V. (2019) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press, New York.
Fallman 2007 Fallman, D. (2007) “Why Research-oriented Design Isn't Design-oriented Research: On the Tensions Between Design and Research in an Implicit Design Discipline”, Knowledge, Technology and Policy, 20.3: 193–200.
Fuchs 2017 Fuchs, C. (2017) “From Digital Positivism and Administrative Big Data Analytics Towards Critical Digital and Social Media Research”, European Journal of Communication, 32.1: 37–49.
Galey and Ruecker 2010 Galey, A. and S. Ruecker. (2010) “How a Prototype Argues”, Literary and Linguistic Computing, 25.4: 405–24.
Galloway 2016 Galloway, A. (2016) “The Digital in the Humanities: An Interview with Alexander Galloway. (Alissa Dinsman)”, Los Angeles Review of Books. Available at. https://lareviewofbooks.org/article/the-digital-in-the-humanities-an-interview-with-alexander-galloway/
Glas et al. 2020 Glas, R., van Vught, J.F., and Werning, S. (2020) “‘Thinking Through’ Games in the Classroom: Using Discursive Game Design to Play and Engage with Historical Datasets”, Transactions of the Digital Games Research Association, 5.1.
Hepp 2020 Hepp, A. (2020) “Artificial Companions, Social Bots and Work Bots: Communicative Robots as Research Objects of Media and Communication Studies”, Media, Culture and Society: 1-17.
Jackson 2014 Jackson, S. (2014) “Rethinking Repair” In T. Gillespie, P. Boczkowski and K. Foot (eds.), Media Technologies: Essays on Communication, Materiality and Society, The MIT Press, Cambridge, MA, pp. 221-40.
Jacomy 2020 Jacomy, M. (2020) “Digital Criticism: In Favor of the Scientific Instrument”, Reticular Hypotheses. Available at: https://reticular.hypotheses.org/1692
Koolen et al. 2019 Koolen, M., Van Gorp, J., and van Ossenbruggen, J. (2019) “Toward a Model for Digital Tool Criticism: Reflection as Integrative Practice”, Digital Scholarship in the Humanities, 34.2: 368–85.
Latour and Woolgar 1986 Latour, B. and S. Woolgar. (1986) Laboratory Life: The Construction of Scientific Facts (2nd edition). Princeton University Press, Princeton.
Loukissas 2019 Loukissas, Y.A. (2019) All Data Are Local: Thinking Critically in a Data-Driven Society. The MIT Press, Cambridge.
Masson 2017 Masson, E. (2017) “Humanistic Data Research. An Encounter Between Epistemic Traditions.” In MT. Schäfer and K. van Es (eds), The Datafied Society. Studying Culture through Data, Amsterdam University Press, Amsterdam, pp. 25-38.
Noble 2018 Noble, S. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, New York.
Nowviskie 2015 Nowviskie, B. (2015) “On Capacity and Care”, Author’s blog. Available at: http://nowviskie.org/2015/on-capacity-and-care/
Nowviskie 2016 Nowviskie, B. (2016) “On the Origin of ‘Hack’ and ‘Yack.’” In M. K. Gold and L. F. Klein (eds), Debates in the Digital Humanities, University of Minnesota Press, Minneapolis, pp. 66-70.
Odendaal and Zavala 2018 Odendaal, A., and Zavala, K. (2018) “Black Boxes out of Cardboard: Algorithmic Literacy through Critical Board Game Design”, Analog Game Studies. Available at: http://analoggamestudies.org/2018/12/black-boxes-out-of-cardboard-algorithmic-literacy-through-critical-board-game-design/
Parsons 2015 Parsons, M.A. (2015) “e-Infrastructures and RDA for data intensive science”, Research Data Alliance. Available at: https://rd-alliance.org/sites/default/files/attachment/Infrastructures,%20relationship,%20trust%20and%20RDA_MarkParsons.pdf
Plantin et al. 2018 Plantin, J. C., Lagoze, C., Edwards, P. N., and Sandvig, C. (2018) “Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook”, New Media and Society, 20.1: 293-310.
Ramsay 2011 Ramsay, S. (2011) “Who’s In and Who’s Out”, Author’s blog. Available at: http://stephenramsay.us/text/2011/01/08/whos-in-and-whos-out
Ramsay and Rockwell 2012 Ramsay, S. and Rockwell, G. (2012) “Developing Things: Notes toward an Epistemology of Building in the Digital Humanities.” In M. Gold (ed.), Debates in the Digital Humanities, University of Minnesota Press, Minneapolis, pp 75-84.
Ratto 2011 Ratto, M. (2011) “Critical Making: Conceptual and Material Studies in Technology and Social Life”, The Information Society, 27.4: 252-60.
Rawson and Muñoz 2019 Rawson, K. and Muñoz, T. (2019) “Against Cleaning.” In M. K. Gold and L. F. Klein (eds), Debates in the Digital Humanities, University of Minnesota Press, Minneapolis, pp. 279–92.
Rieder 2020 Rieder, B. (2020) Engines of Order: A Mechanology of Algorithmic Techniques. Amsterdam University Press, Amsterdam.
Rieder and Röhle 2017 Rieder, B. and T. Röhle. (2017) “Digital Methods: From Challenges to Bildung.” In M.T. Schäfer and K. van Es (eds.), The Datafied Society: Studying Culture through Data, Amsterdam University Press, Amsterdam, pp. 109–24.
Rushkoff 2010 Rushkoff, D. (2010) Program or Be Programmed: Ten Commands for a Digital Age. OR Books, New York.
Schmidt 2016 Schmidt, B. (2016) “Do Digital Humanists Need to Understand Algorithms?” In M.K. Gold and L.F. Klein (eds), Debates in the Digital Humanities, University of Minnesota Press, Minneapolis, pp. 546-555.
Schäfer 2011 Schäfer, M. T. (2011) Bastard Culture!: How User Participation Transforms Cultural Production. Amsterdam University Press, Amsterdam.
Schön 1983 Schön, D. (1983) The Reflective Practitioner: How Professionals Think in Action. Basic Books, New York.
Svensson 2015 Svensson, P. (2015) “The Humanistiscope – Exploring the Situatedness of Humanities Infrastructure.” In P. Svensson and D.T. Goldberg (eds), Between Humanities and the Digital, The MIT Press, Cambridge, MA, pp. 337-54.
Tenen 2016 Tenen, D. (2016) “Blunt Instrumentalism” In M. K. Gold and L. F. Klein (eds), Debates in the Digital Humanities, University of Minnesota Press, Minneapolis, pp. 83-91.
Van Dijck et al. 2018 Van Dijck, J., Poell, T., and de Waal, M. (2018) The Platform Society: Public Values in a Connective World. Oxford University Press, Oxford.
Van Es et al. 2018 Van Es, K., Wieringa, M. and Schäfer, M. T. (2018) “Tool Criticism: From Digital Methods to Digital Methodology” ACM WS.2 2018: Proceedings of the 2nd International Conference on Web Studies, Paris, France, October 2018.
Van Es et al. 2021 Van Es, K., Schäfer, M. T., and Wieringa, M. (2021) “Tool Criticism and the Computational Turn: A ‘Methodological Moment’ in Media and Communication Studies”, M&K Medien & Kommunikationswissenschaft, 69.1: 46-64.
Van Geenen 2020 Van Geenen, D. (2020) “Critical Affordance Analysis for Digital Methods: The Case of Gephi.” In M. Burkhardt, M. Shnayien and K. Grashöfer (eds.), Explorations in Digital Cultures, Meson press, Lüneburg, pp. 1–21.
Van den Boomen et al. 2009 Van den Boomen, M., Lammes, S., Lehmann, A., Schäfer, M. T., and Raessens, J. (2009) “Introduction: From the Virtual to Matters of Fact and Concern.” In M. van den Boomen, S. Lammes, A. Lehmann, M. T. Schäfer and J. Raessens (eds.), Digital Material: Tracing New Media in Everyday Life and Technology, Amsterdam University Press, Amsterdam, pp. 7-17.
Verhoeven 2016 Verhoeven, D. (2016) “Opening Keynote: Identifying the Point of It All: Towards a Model of ‘Digital Infrapuncture’”, Digital Humanities at Oxford Summer School. Available at: http://podcasts.ox.ac.uk/opening-keynote-identifying-point-it-all-towards-model-digital-infrapuncture
Werning 2020 Werning, S. (2020) “Making Data Playable: A Game Co-creation Method to Promote Creative Data Literacy”, Journal of Media Literacy Education, 12.3: 88-101.