DHQ: Digital Humanities Quarterly
2009
Volume 3 Number 4

The “e” Prefix: e-Science, e-Art & the New Creativity

Gregory Sporton  <gregory_dot_sporton_at_bcu_dot_ac_dot_uk>, Director, Visualisation Research Unit, School of Art, Birmingham City University

Abstract

What does it mean to put an “e” ahead of a concept? This essay discusses the purpose of doing such a thing, arguing that there is a distinct method in the apparent randomness of labelling something “e” this or that. Far from simply denoting that it might be done with computers (and, indeed, what isn't today), Sporton argues that beyond signalling that this is something to do with technology, there is an emergent “e-Culture” that reunites the arts and sciences after two hundred years of separate development within the academy. This “e-Culture” reflects the values, opportunities and restrictions of the Internet as a research environment, and the potential of that environment requires a mindset focussed on collaboration to achieve anything of creative significance.

Whatever variation of the organising principles of a university an institution likes to invoke, the distinction between arts and humanities, on the one hand, and the sciences, on the other, is one that most people who work in the university sector anywhere in the world would recognise. It is based on assumptions that date back to the Industrial Revolution and the Enlightenment, defining moments for human enquiry in the respective areas. What is interesting here is not just the division of labour between the scientific, which assumes itself to proceed along lines of rationality, and the artistic, with its aspiration to spark the imagination through creativity, but the entrenched values system that reinforces and promotes these stereotypes whether they are accurate or not. There can surely be few people left who do not recognise the creativity of science, those moments of intuition that set aside years of training in scientific method, nor, as I can attest from first-hand experience, the rationality of progression in the arts, which requires intellectual discipline in order to get on with the creating. The division of resources is also a matter for observation, with the assumption that without expensive kit, scientists are unable to get much done, and that the stuff of the arts is human capital, and therefore less demanding on plant and equipment. Even teleological assumptions about the purpose of enquiry seem difficult now to fit into these traditional paradigms. We have a Western economy more dependent than ever on creativity in the service sector, existing alongside disproportionately large investment in scientific research infrastructure whose utility no one can predict.
This essay is about how digital technology and the network environment have changed long-standing arrangements for subject divisions and resource provision across the research sector through the development of a common knowledge and resource base. The common denominator in this transformation I will refer to as the “e-prefix,” a mechanism whose function mitigates opposition to its activities whilst at the same time building an alternative system that leaves behind the binary subject distinctions of the 20th century. As the craft practices of the arts and sciences proceed in their own directions, the “e” component of each has created communities of mutual interest that suspect they hold the key to the future development of the worlds in which they trained. Their shared interest in exploring the possibilities of new technologies ahead of the crystallisation of e-practices provides an historic moment in which the categories and values that have separated science and art come under examination. This essay intends to address some of the issues that arise when traditions two hundred years old come under pressure, and to survey the possibilities of reframing the way we see our culture of learning, research and practice.
The task of early adopters and innovators of what I will refer to as “the e-Culture” is to imbue the e-prefix with meaning and application whilst seizing the opportunity to use it to transform their disciplines. This, it will be suggested, will not be done without significant resistance from those who would prefer the current division of labour that dichotomises the arts and the sciences. This is not simply a matter of making an artistic claim on the resources of the sciences; it is also to identify how the locus of creativity has relocated under these new conditions and how that may impact on the culture of the arts in particular. This goes some way to explaining the importance of the e-prefix, a handy signifier for something that happens in “the e-Culture,” yet thought of cynically by most researchers as a means of organising and dispersing funding. The e-prefix and the “e-Culture” it can support are capable of assisting the creation of new working practices and new forms of understanding across what we used to think of as specific domains. It seems hardly surprising that the transformative technology of our times will also transform the research sector, but the extent of this is as yet unknown, and depends on the development of successful models of research practice and demonstrable evidence of their value.
Though it risks parody, academic disciplines have cultures, expectations and craft practices that are specific properties of their shared ways of seeing the world. As [Kuhn 1962] points out, science is not always the progressive, rational search for truth it often assumes itself to be, nor, I might add, are the arts manifestations of romantic expressionism. Self-criticism and reflexivity help retain the human dimension of the enterprise. But as [Hargreaves 1982] suggests, one of the most powerful features of the organisation of the curriculum is the tendency to create very strong subject loyalties that in turn reinforce existing subject cultures. The ability to replicate the cultural models of the subject domain is the means of ensuring progression through its political and social levels. In the case of emerging transformative practices, adopters can expect significant resistance where their practices target or erode long-standing stabilising principles. Other forces in play determine the potential effectiveness of attempts to reorganise a subject and its resource base. At this point, it is worth identifying a starting point for this process, and the creation of the much-derided “e-Science” is a useful place to begin.
In 1999, Professor John Taylor, the then Director General of the United Kingdom's Office of Science and Technology, invented the term to describe the development of computationally intensive science carried out in highly distributed network environments, or science that uses immense data sets requiring grid computing. Essentially, this move created a category for an enabling infrastructure used to support scientific enquiry, one that would evolve to include the research that required it. The term e-Science is also routinely applied to models that distribute computing power or enable other collaborative or shared resources across a network, thus incorporating collaborations between people that are less CPU-intensive but still require the resources of the network to make them operable. These imitate or incorporate the development of Web 2.0-style applications as well as multiple-user video-conferencing systems like AccessGrid. However, by the time we at the Visualisation Research Unit in Birmingham began our enquiries into e-Science, significant doses of cynicism had become palpable amongst e-Scientists. As described in our research report “Building the Wireframe” [Sporton 2007], e-Science was seen by colleagues in computer science chiefly as a means of glossing funding proposals.
Of course, Taylor's formulation was never as innocent as the recognition and invention of a new brand of computer science. Clearly, it was designed to describe the difference in methodology between existing science practices and those fuelled by the high-octane power of grid computing, with its capacity to deal with hitherto unknown numbers of objects and subjects across a wide spectrum of scientific enquiry. It seems fairly clear that the creation of the term was a means of encouraging computer scientists into projects that were applied computing rather than theoretical speculations about the power of linked computers, and of justifying their funding with some real-world outcomes. This had the added advantage of drawing non-IT-specialist scientists into the e-Science mix. Given the suggestions here about the implications for cultural models of practice in subject domains, the term e-Science did another, more important thing that would be copied by e-Research, e-Learning and even a nascent e-Art. Taylor's synthetic confection created a distinction between what we might refer to as the existing “craft” of research and practice in specific areas of science, and the development of an infrastructure to support it. The result was to mitigate the hostility between methodologies and research outputs driven by computing infrastructure and the established methods of dealing with data in the sciences. It had the additional value of ring-fencing large amounts of funding specifically for the development of collaborative work between keen adopters of the technology in traditional science realms and computer scientists interested to see how the technologies might manifest themselves in specific projects. As such, in a stroke it enabled the resources and opportunities to come together without arousing too much initial suspicion, probably accounting for the cynicism of the computer scientists we would encounter six years later. Taylor had, in effect, stolen their clothes (and a good slice of their funding) and retailored them to fit broader research needs. By applying the term flexibly and avoiding direct threats to prevailing subject cultures, Taylor secured the resources and environment to encourage widespread application of grid computing and network infrastructure to a justifiably broad range of problems in the sciences.
This issue of organising the introduction of network computing into hitherto low-tech areas corresponds to the pattern of its implementation in areas of the arts, as they grapple with the attack on traditional craft practices that technologists and their computers are suspected of planning. Unlike the sciences, methodologies for some forms of creative practice have remained unchanged for centuries, despite the changing technological environments around them (painting and literature come to mind most immediately). Indeed, in some areas of the arts, the very human-intensive nature of practice is seen as a bulwark against the perceived loss of our humanity through the use of technology. In the same way that William Morris and John Ruskin responded to the mass manufacturing of the industrial era by reviving crafts like weaving, so photographers, to use but one example, have rediscovered wet processing as a means of shoring up their claims to specialness.
In relation to the principles and practices of the traditional subject domain, e-Science approaches in the arts encounter different conditions and challenges. The first is the preoccupation in the arts with the work of the individual, with a continuing focus on personal statement as a precondition of the validity of artistic work. However illusory this is in modern art-making, it remains a prevailing paradigm in the minds of artists and the public, and scarcely the right sort of landscape for the collaborative resource sharing in which e-Science specialises. The introduction of computing into artistic processes has generally followed this model of personal authorship by a specific artist, though the implementation of certain types of technology has invariably taken over specific industries. Photoshop revolutionised photography, as Final Cut Pro did film editing, for largely the same reasons. Rather than creating new processes or forms, the software (when combined with desktop-sized processing power) supported and replicated an existing workflow. The bin icons provided for the Final Cut Pro user were once actual bins in an editing room, though only filmmakers over forty can now remember them. The next generation will know them only as metaphors. In the case of movie making, the changes in technology have rapidly reduced the costs and complexities of practice, putting the capacity for high-quality broadcast output into the hands of anyone who can afford a three-chip camera, a Mac and a copy of Final Cut Pro.
In partnership with increasingly sophisticated digital stills cameras, the iconic software that is Photoshop, with its masses of filters and functions, speeds up the old craft process that took place in the darkroom. The move from wet to dry processing has squeezed many of the old photo-processing firms, and surprised film manufacturers with the swift replacement of their film stock by SD cards. Despite romantic nostalgics claiming that “something” is forever lost in the move to dry processing, photographers and the public at large have been happy to dump the tedious delays of the past. The attraction of cheap and instant image manipulation, offered when the computer becomes first the workplace for the art of photography and latterly its gallery, the means of sharing and presenting photographs previously kept in drawers, has become ineluctable. Notwithstanding the emergence of counter-revolutionary photography wedded to the chemicals, digital cameras have changed the simple act of taking photographs into one of creating images.
The Photoshop example is helpful for two reasons. The first is the reality that the technology involved really does do different things to the processes of photography. In fact, the process becomes more about the treatment of an image than the representation of something. In its forms of presentation and manipulation, photography is changed into image-making, leaving us less sure than ever about the “truth” of a photograph. The second observation is a direct result of the new process. Such is its flexibility and effectiveness that image-making becomes more like authorship than ever before, with far more treatments, far less time-consuming to apply, available to even the most average of photographers. In this case, the technology reinforces the existing practice of the arts, leaving the individual creator central to the creative and presentational process. By doing so, the technology, threatening as it is to an existing craft, extends the practice of art without destroying the premise of it. Indeed, whether in photography or film making, the technological incursion into the arts begins by challenging what is already an expensive and technologically heavy practice. Assistive technologies appear in this model to have the most to offer as digitised analogues of existing technical processes. Creative practice as an extension of these technologically complex processes has seen fewer barriers because the existing practices were already highly technical and expensive, and the digital versions looked initially like a harmless, natural extension. However, as we shall see, the real impact of these uses of the technology is only gradually becoming apparent.
Part of the problem with e-Science processes in art is linked to this perception of technology as an essentially corrupting force. Art, as a subject, encourages deep loyalties to craft processes that have reputedly ancient origins. This forms a link between contemporary practice of art and its distinguished history. However, as [Snow 1959] pointed out in his “Two Cultures” essay, the privileges of art are built on this strongly overvalued link to craft practice as a spiritual domain. In the agnostic 20th century, the transformative dimension of art, and the place it takes in social life as a replacement for religion, enhanced this mythology of art. Art and artists came to present their work as a transcendent practice, often with very little evidence to support such an idea. The raw, visceral encounter between artist and material is part of this mythology. Artists transform material through their physical actions, moving beyond the intellectual decision-making that appears to go with technology, thus imbuing the creative process with something essential that moves beyond the rational. It is not possible to speak of art as having a unifying idea, but it is possible to see that themes about art strongly determine and support an argument about its cultural importance. Whether through the enabling of individual expression, the sharing of feelings and experiences, or the link through art to a greater appreciation of the world around us, the art agenda tends to focus on this unmediated physical process of making, positioning the art work as an individual's creative proposition. This is a problematic way in which to posit art in the digital age, and it discounts the creative possibilities of the transformative technology of the age. One of the ways to miss the creative potential of technology is to employ it only in the manner described above. It seems clear from the example of Photoshop that digital technology can be useful to the creative process as a digitised analogue. What is less clear is whether the properties of the technology are being exploited to do more than replicate, albeit at astonishingly improved speeds, an existing process.
Of course, this presupposes assumptions about technology, too. As Foucault's use of the term in “Discipline and Punish” indicates, technology is a form that ideas take [Foucault 1979]. The capacity to formulate an intention initiates a technological process that resolves in a predicted or hoped-for way, and yet it also seems in the nature of digital technology to throw up unplanned and unintended consequences ripe for exploitation by creative practitioners. It seems unsurprising to progress from the mechanical to the digital camera when the possibility of machine-captured imagery is already current. More problematic is the search through the infrastructure of e-Science for creative possibilities, given the apparently strong framing of grid computing and networked technology towards hard science. For e-Art, this demands a creative response to the new environments created through e-Science, posing questions about the features of the network and how it might be possible to interact creatively with it. Indeed, for some, the lack of an apparent craft supporting the creative work of an individual rules out the possibility that this type of technology can be creative or thought of as Art at all.
One of the potential solutions to this problem is to examine the nature of the platform that the network can present for artists. Looking beyond the initial instinct of creative practitioners to find more efficient ways to create their work, the Internet has also offered alternative routes to the public for artists without access to gallery spaces or cinemas. YouTube, Flickr and similar “Web 2.0”-style sites have created places where once obscure work can be discovered by those with the curiosity to look. This is one of the strongest arguments for serious engagement of artists with the network, though it should be noted that such engagement is rarely successful unless the artist understands the nature of the Internet as a social environment, along with both the limitations and advantages of the technology. The Internet as a delivery system is often presumed to be improving to the point where it might compete with existing external systems like the cinema or concert hall, and yet to look to these models is to mistake the nature of the environment.
Delivery across web-based systems is likely to come with losses of “quality” in terms of video compression, or latency in real-time interactions. To cite these as deficiencies is to blind ourselves to some of the best creative features of the web, and to deny the Web 2.0 developers their due in developing systems where the data is available on call, and can be reformulated to fit the purpose of either the user or the producer. It is the interactivity of the web that offers some of its most tempting creative features, and where we can begin to think about art in a different way. The possibilities of reworking or mashing-up web-based resources already give rise to significant practice: games of Photoshop Tennis, where successive contributors change and develop a starting image (and in doing so demonstrate their mastery of the software), or mashups using APIs that source unrelated material and present it in newly coherent form. The distribution function of the web, the simple version that allowed for access to original material, is changed into an editing and presentation opportunity by refitting the material in new forms that emphasise other features, like geographical location or subject matter. It is here that the creative features particular to the Web begin to manifest themselves, in the possibilities of formulating and reformulating material. The logical extension of this idea is the acceptance that works of art made for the Internet may have no actual final form, but can continue to mutate in any number of directions at the hands of any number of contributor/artists.
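To make the refitting principle concrete, the sketch below (in Python) takes two invented “feeds,” photographs and short place notes, neither of which knows about the other, and reorganises them into a single form keyed by geographical location. The feed contents, field names and coordinates are hypothetical, chosen only to illustrate the mash-up idea described above; a working mashup would of course draw on live APIs rather than hard-coded lists.

# A minimal, hypothetical sketch of the mash-up principle: two unrelated
# "feeds" -- photographs and short place notes, each carrying coordinates --
# are refitted into one structure organised by geographical location.
# The feed contents and field names are invented purely for illustration.

from collections import defaultdict

photos = [
    {"title": "Canal at dusk", "lat": 52.48, "lon": -1.90},
    {"title": "Market stalls", "lat": 52.48, "lon": -1.89},
]
notes = [
    {"text": "Former site of a Victorian warehouse", "lat": 52.48, "lon": -1.90},
]

def location_key(item, precision=2):
    """Round coordinates so that nearby items share a key."""
    return (round(item["lat"], precision), round(item["lon"], precision))

# Refit both feeds into a single form keyed by place rather than by source.
mashup = defaultdict(lambda: {"photos": [], "notes": []})
for p in photos:
    mashup[location_key(p)]["photos"].append(p["title"])
for n in notes:
    mashup[location_key(n)]["notes"].append(n["text"])

for place, content in sorted(mashup.items()):
    print(place, content)

The point of the sketch is not the code itself but the reorientation it performs: material gathered for one purpose is re-presented around a feature, here location, that neither original source treated as primary.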
Superficially, this begins to look like a creative direction for all that network technology. Certainly the creative energies of the millions with broadband access suggest this is so. By creating small APIs to organise and reorganise data, Web 2.0 applications provide a fluency and flexibility with source material as yet unknown. This is a long way from Burroughs's cut-up and fold-in techniques of the 1960s, a means of displacing deterministic consciousness from the creative process to which the mash-up owes its origin. However, I want to suggest at this point that something else is missing in the artistic sphere that is computing. The relationship between e-Science and creativity cannot surely be limited to known techniques and outputs, any more than building a synchrotron determines the results of the experiments undertaken with it. As networked technology takes on the role of the transformative technology of its time, replacing television and telecommunications in the process, the connection between the infrastructure and the possibilities for creative practice must surely run deeper. It remains something of a tenet in my own laboratory in the School of Art at Birmingham City University that creativity is not solely the domain of artists, and it seems clear that in an old-fashioned division of labour between the arts and the sciences, the arts have lost many creative minds to computing.
This shift in the locus of creativity is at least as important as the urge to be creative with computers, and offers some clues as to how the new art work of the networked future may emerge. The intensity of working with computers on complex tasks is, I would suggest, comparable to the depth of craft practice that is so highly prized in so much of the arts already. The dependence on technology and technologists for structures in which to work, which typifies much contemporary creative practice with computers, appears to be the gambit of late adopters of ICT who suspect the digital process contains a rich area of new practice. This is not to dismiss this work, but to acknowledge the stratification of labour that characterises much practice in this field. The establishment of multi-disciplinary groups becomes a necessity when no single party or sub-set can have acquired sufficient understanding to proceed on their own. The training of artists is nevertheless already undergoing a change not linked to (and often not reflected in) the curricula of the institutions in which they study. Their familiarity with computers and network technology, with which they have grown up, should give rise to a different kind of practice in the arts, more inter-disciplinary than multi-disciplinary in character, designed to do more than exploit the layer of surface technology. The future for e-Art is in creating through technology, accepting the technology to be an integrated part of the creative process. For many e-Scientists, this is a commonplace. For an e-Arts practice, a basis in computing will be the craft practice that supports the expressive one.
To arrive at this particular destination for e-Arts will require something of a change in mindset. This relates not only to the idea of the network as a creative environment, but to the cultural dispositions of those working in the Arts in terms of the role computers play. The first part of this agenda is a step change in expectations for the Arts in the allocation of resources. As I suggested at the beginning of this paper, the current assumptions about the productivity and viability of determining resource needs in relation to subject areas become somewhat obsolete when it comes to the demands of digital technology. As researchers working in arts fields, we find ourselves with challenges no less baffling and expensive, but simply without the history or culture of being able realistically to demand and receive a share of the resources that might allow us to make our breakthroughs. Much of this is determined first by the current generation of managers, who are used to research budgets buying out time rather than acquiring capital equipment that needs maintenance, with the concomitant problems this brings. The second is the impact on the training of artists and researchers. Whether they are intending an e-Art career or otherwise, it seems irresponsible to send them out into a networked world without significant experience of how they might practise their trade there. To this end it seems crucial that creative arts departments make serious and significant connections with computer science ones. The enriching of computing practice through an encounter with the methods of the arts may well spark off new areas for the application of the powerful technology we have at our disposal. By emphasising experimentation, arts practitioners can introduce the imaginative “play” component that all serious enterprises require. This would provide a staging post for the disruptive, difficult and painful task of reorganising how we think, act and create in a world of digital networked technology, extending the issues downwards into the undergraduate populations and upwards in the direction of the Research Councils. Like art, technology is also a means of transformation. Their integration in the context of research and practice will demand significant reorganisation of our departments and of our outlook on the role of computers. Following the effect of Taylor's original formulation of “e-Science” as a means of mitigating protest about confusing science with computers, “e-Art” needs to establish a foothold in the curriculum before it is realised that it has become an integrated part of it.
Emerging as a result of communities of interest drawing together, the “e-Culture” has some striking features. Chief among these is the registering of a common interest in the capacity of technology to transform our social experiences. The dichotomising of the traditional arts and sciences has served to create barriers between these groups that the technology is breaking down. With this breakdown will surely come realignments into interdisciplinary practice that respects the creativity and craft experience of all parties. At its heart is a mutual understanding of technology and of the increases in processing power, distribution of resources or accessibility across a network that can transform the relationship with what we might call an audience, who may themselves be happily engaged in the making process, or in appropriating its results. This distribution of creativity changes the methods of interaction and the results of the collaborative process, producing a very different and more challenging artistic environment than the traditional proposition of the lone artist asserting his genius.
For the art world, limping on since post-modernism kicked out from under it the cultural and social privilege it had so jealously guarded for the previous five hundred years, there is no question that this is about time, too. Some realignment of artists and the public engaged with art has been overdue, and the distribution of the power of creativity through relatively inexpensive technology connected to the network has been a surprise in its social impact and creative richness. The systematic investigation of technologies for creative purposes has been hampered historically by the absence of software and platforms specifically developed for arts-based usage that seek to do more than replicate existing creative practice. The existence of sensitive graphics tablets, for example, allows a traditional craft like drawing to be done on a computer, but the practice is not transformed until we network up a small group of machines and turn the once solitary and intimate act of drawing into a group enterprise (an idea sketched below). Much existing creative practice can be treated this way, not by reproducing the skills but by reinterpreting them for the network age. Security, in terms of sharing the large data volumes invariably produced by non-text-based forms, also continues to be a problem, as does the mess of intellectual property rights law designed for a pre-Internet age and now needing urgent revision. One distinction, that of the virtual and the real, which once inhibited artists from seeking satisfaction in their expression through the Internet, has steadily been eroded. Whilst Internet presence can come in many shapes and representations, it seems we have all accepted that such experiences, where they affect us, are undeniably real. “Second Life,” for example, becomes a place for creative exchange because of its potential as an alternative incarnation for the imagination rather than its ability to replicate the first one.
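As an illustration of that networked-drawing proposition, the sketch below is a hypothetical Python relay, not an account of any existing system: it simply rebroadcasts whatever stroke messages arrive from one connected participant to all of the others, so that the drawing surface is held in common rather than on a single machine. The port number and the newline-delimited message format are assumptions made for the purpose of the example.

# A hypothetical relay for shared drawing: every stroke sent by one
# participant is rebroadcast to all of the others, so the canvas becomes
# a group enterprise rather than a solitary one.
# Messages are assumed to be newline-delimited, e.g. one JSON stroke per line.

import socket
import threading

clients = []                      # open connections to all participants
clients_lock = threading.Lock()

def handle_client(conn):
    """Read newline-delimited stroke messages from one participant and relay them."""
    try:
        for line in conn.makefile("r"):
            payload = line.encode()
            with clients_lock:
                for other in clients:
                    if other is not conn:
                        try:
                            other.sendall(payload)
                        except OSError:
                            pass      # a departed participant; their own handler cleans up
    finally:
        with clients_lock:
            clients.remove(conn)
        conn.close()

def serve(host="0.0.0.0", port=9400):   # port chosen arbitrarily for the example
    server = socket.create_server((host, port))
    while True:
        conn, _ = server.accept()
        with clients_lock:
            clients.append(conn)
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    serve()

Nothing in the relay is specific to drawing; what transforms the practice is the decision to treat every participant's marks as input to everyone else's canvas.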
I have suggested here that it is not possible to discuss e-Science without reference to the e-Culture it is spawning, a culture that necessarily includes creative practice, social networking and much else besides. The difficulty, as reported above, is the extent of the cynicism about e-Science as a means of funding. For those who may view it in such a limited fashion, it should be noted that e-Science has successfully redeveloped the research infrastructure and the research environment of higher education in the UK without appearing immediately threatening. It has, in less than ten years, turned from the provision of research infrastructure into a legitimate research aim in itself. Without the resources that e-Science could command, but knowing a helpful infrastructure is in place, e-Arts today appears to be in a similar state. Much is made of demonstrating evidence of value for the e-Science agenda in the arts and humanities, though the emerging background of an e-Culture is likely to make this question obsolete. What is required is for some form of institutional support to emerge to encourage e-Arts methods of practice, the results of which are bound to take us further into the unknown and unknowable world of human creativity, of which the network may yet prove our most stunning invention.

Works Cited

Foucault 1979 
Foucault, Michel. Discipline and Punish. London: Peregrine Books, 1979.
Hargreaves 1982 
Hargreaves, David. The Challenge of the Comprehensive School. London: Routledge, 1982.
Kuhn 1962 
Kuhn, Thomas. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1962.
Snow 1959 
Snow, C.P. The Two Cultures. Cambridge: Cambridge University Press, 1959.
Sporton 2007 
Sporton, G. “Building the Wireframe: e-Science for the Art Infrastructure.” London: AHESSC, 2007. http://www.ahessc.ac.uk/files/active/0/BW-report.pdf