Some recent-ish publications

Experimental Publishing Compendium

Combinatorial Books: Gathering Flowers (book series)

‘How To Be A Pirate: An Interview with Alexandra Elbakyan and Gary Hall’ by Holger Briel.

'Experimenting With Copyright Licences' (blog post for the COPIM project - part of the documentation for the first book coming out of the Combinatorial Books pilot)

Review of 'Bitstreams: The Future of Digital Literary Heritage' by Matthew Kirschenbaum

Contribution to 'Archipiélago Crítico. ¡Formado está! ¡Naveguémoslo!' ['Critical Archipelago. It Is Formed! Let's Navigate It!'] (invited talk: in Spanish translation with English subtitles)

'Defund Culture' (journal article)

How to Practise the Culture-led Re-Commoning of Cities (printable poster), Partisan Social Club, adjusted by Gary Hall

'Pluriversal Socialism - The Very Idea' (journal article)

'Writing Against Elitism with A Stubborn Fury' (podcast)

'The Uberfication of the University - with Gary Hall' (podcast)

'"La modernidad fue un "blip" en el sistema": sobre teorías y disrupciones con Gary Hall' ['"Modernity was a "blip" in the system": on theories and disruptions with Gary Hall']' (press interview in Colombia)

'Combinatorial Books - Gathering Flowers', with Janneke Adema and Gabriela Méndez Cota - Part 1; Part 2; Part 3 (blog post)

Open Access

Most of Gary's work is freely available to read and download either here in Media Gifts, in Coventry University's online repository PURE, or in Humanities Commons.

Radical Open Access

Radical Open Access Virtual Book Stand

'"Communists of Knowledge"? A case for the implementation of "radical open access" in the humanities and social sciences' (an MA dissertation about the ROAC by Ellie Masterman). 

Wednesday
Jan 12, 2011

On the limits of openness V: there are no digital humanities

Let’s bracket the many questions that can be raised regarding Deleuze’s thesis on the societies of control (some of which can also be raised regarding Lyotard’s account of the postmodern condition), and the reasons it has been taken up and used so readily within the contemporary social sciences, and social theory especially. For the time being, let us pursue a little further the hypothesis that the externalization of knowledge onto computers, databases, servers and the cloud is involved in the constitution of a different form of both society and human subject.

To what extent do such developments cast the so-called computational turn in the humanities in a rather different light to the celebratory data-fetishism that has come to dominate this rapidly emerging field of late? Is the direct, practical use of techniques and methodologies drawn from computer science and fields related to it here too helping to produce a major alteration in the status and nature of knowledge, and indeed the human subject? I’m thinking not just of the use of tools such as Anthologize, Delicious, Juxta, Mendeley, Pliny, Prezi and Zotero to structure and disseminate scholarship and learning in the humanities, but also of the generation of dynamic maps of large humanities data sets, and the employment of algorithmic techniques to search for and identify patterns in literary, cultural and filmic texts, as well as the way in which the interactive nature of much digital technology is enabling user data regarding people’s creative activities with these media to be captured, mined and analyzed by humanities scholars.
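
To give a schematic sense of what such algorithmic techniques involve at their simplest, here is a minimal sketch in Python of pattern-searching across a small corpus: counting the most frequent words in each text. The file names are hypothetical placeholders, and real projects use far more elaborate methods (collocation analysis, stylometry, topic modelling):

    # A minimal sketch of algorithmic pattern-finding in literary texts:
    # tokenize each file and count its most frequent words.
    # The corpus file names below are hypothetical placeholders.
    import collections
    import re

    def word_frequencies(path):
        # Return a Counter of lower-cased word tokens in a text file.
        with open(path, encoding="utf-8") as f:
            tokens = re.findall(r"[a-z']+", f.read().lower())
        return collections.Counter(tokens)

    corpus = ["on_the_road.txt", "mrs_dalloway.txt"]  # hypothetical files
    for path in corpus:
        print(path, word_frequencies(path).most_common(10))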

To be sure, in what seems to be almost the reverse of the situation we saw Lyotard describe, many of those in the humanities - including some of the field’s most radical thinkers - do now appear to be looking increasingly to science (and technology and mathematics) to provide their research with a degree of legitimacy. Witness Franco ‘Bifo’ Berardi’s appeal to ‘the history of modern chemistry on the one hand, and the most recent cognitive theories on the other’, for confirmation of the Compositionist philosophical hypothesis in his 2009 book, The Soul at Work: ‘There is no object, no existent, and no person: only aggregates, temporary atomic compositions, figures that the human eye perceives as stable but that are indeed mutational, transient, frayed and indefinable’. It is this hypothesis, derived from Democritus, that Bifo sees as underpinning the methods of both the schizoanalysis of Deleuze and Guattari and the Italian Autonomist theory, on which his own Compositionist philosophy is based. It is interesting, however, that Bifo should now feel the need to turn, albeit briefly and almost in passing, to science to underpin and confirm it.

Can this turn toward the sciences (if there has indeed been such a turn, which is by no means certain) be regarded as a response on the part of the humanities to the perceived lack of credibility, if not obsolescence, of their metanarratives of legitimation: the life of the spirit and the Enlightenment, but also Marxism, psychoanalysis and so forth? Indeed, are the sciences today to be regarded as answering many humanities questions more convincingly than the humanities themselves?

While ideas of this kind appear just that little bit too neat and symmetrical to be entirely convincing, this so-called ‘scientific turn’ in the humanities has been attributed by some to a crisis of confidence. It is a crisis regarded as having been brought about, if not by the lack of credibility of the humanities’ metanarratives of legitimation exactly, then at least in part by the ‘imperious attitude’ of the sciences. This attitude has led the latter to colonize the humanists’ space in the form of biomedicine, neuroscience, theories of cognition and so on.  Is the turn toward computing just the latest manifestation of, and response to, this crisis of confidence in the humanities?

Can we go even further and ask: is it evidence that certain parts of the humanities are attempting to increase their connection to society; and to the instrumentality and functionality of society especially? Can it merely be a coincidence that such a turn toward computing is gaining momentum at a time when the likes of the UK government are emphasizing the importance of the STEM subjects and withdrawing support and funding for the humanities? Or is one of the reasons all this is happening now because the humanities, like the sciences themselves, are under pressure from government, business, management, industry and increasingly the media to prove they provide value for money in instrumental, functional, performative terms? (Is the interest in computing a strategic decision on the part of some of those in the humanities? As the project of Cohen and Gibbs shows, one can get funding from the likes of Google. In fact, ‘last summer Google awarded $1 million to professors doing digital humanities research’.)

To what extent, then, is the take-up of practical techniques and approaches from computing science providing some areas of the humanities with a means of defending themselves in an era of global economic crisis and severe cuts to higher education, through the transformation of their knowledge and learning into quantities of information - deliverables? Following Federica Frabetti, can we even position the computational turn as an event created precisely to justify such a move on the part of certain elements within the humanities? And does this mean that, if we don’t simply want to go along with the current movement away from what remains resistant to a general culture of measurement and calculation, and toward a concern to legitimate power and control by optimizing the system’s efficiency, we would be better off using a term other than ‘digital humanities’? After all, as Frabetti points out, the idea of a computational turn implies that the humanities, thanks to the development of a new generation of powerful computers and digital tools, have somehow become digital, or are in the process of becoming digital, or at least coming to terms with the digital and computing. Yet what I am attempting to show here by drawing on the philosophy of Lyotard and others is that the digital is not something that can now be added to the humanities - for the simple reason that the (supposedly pre-digital) humanities can be seen to have had an understanding of, and engagement with, computing and the digital for some time now.


Friday
Dec 17, 2010

On the limits of openness IV: why Facebook is not a factory (even though it profits from the exploitation of labour)

Could the move toward supplying ever more research, information and data online for free on an open basis be part of the development of what Gilles Deleuze called a control society? Here we are no longer subject primarily to those closed, disciplinary modes of power Michel Foucault traced historically in Discipline and Punish, and which govern by means of a dispersed and decentralized ensemble of institutions, instruments, techniques and procedures that operate to produce and regulate subjectivity via the interiorization of the law. Such disciplinary societies are characterized by vast closed environments - the family, school, barracks, factory and, depending on circumstances, the hospital - each with their own laws, through which the individual ceaselessly passes, one to the other. As Deleuze makes clear in his 'Postscript on the Societies of Control', these disciplinary environments or enclaves are about enclosure, confinement, surveillance: their project is to ‘concentrate’, ‘distribute in space’, ‘order in time’, ‘organise production’, ‘administer life’, ‘compose a productive force within the dimension of space-time whose effect will be greater than the sum of its component forces’. Above all, it is the prison which serves as the ‘analogical model’ for the closed system of disciplinary societies and the manner in which it produces and organizes subjectivity. Hence Foucault’s question in Discipline and Punish: ‘Is it surprising that prisons resemble factories, schools, barracks, hospitals, which all resemble prisons?’

For Deleuze, disciplinary societies reached their peak at the beginning of the 20th century. His contention is that, just as Foucault saw disciplinary societies as having superseded ‘societies of sovereignty’ from the late eighteenth century onwards, so, in a process that has accelerated since WWII, social organisation is ceasing to be disciplinary, if it has not ceased to be so already. To such an extent that all the closed spaces associated with disciplinary societies are now in crisis: the family is in crisis, the health service is in crisis, the factory system is in crisis.

These disciplinary societies are in the process of being replaced by societies of control. The latter are our ‘immediate future’, Deleuze writes, and contain extremely rapid, free-floating forms of ‘continuous control and instant communication’, as he puts it in 'Control and Becoming', that operate in environments and spaces that are much more fluid and open. Witness, to provide some 21st-century examples, the way in which increases in computer processing capacity and the associated availability of large, complex data sets have enabled a degree of data mining and pattern recognition to be achieved that makes it possible to automatically anticipate and predict – and thus control, albeit in a comparatively open way – actions on the part of the subject, even before they actually take place. Think of Google News aggregating ‘headlines from news sources worldwide’, grouping similar stories together and displaying them ‘according to each reader's personalized interests’; Last.fm employing scrobbling software to detail the listening habits of users and provide them with personalized selections of music based on their previous listening history; or the European Media Monitor system of the European Commission’s Joint Research Centre, which ‘counts the number of stories on a given topic and looks for the names of people and places to create geotagged "clusters" for given events, like food riots in Haiti or political unrest in Zimbabwe. Burgeoning clusters and increasing numbers of stories indicate a topic of growing importance or severity.’
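
To make the mechanism concrete: here is a minimal sketch, assuming the scikit-learn library and four invented headlines, of the kind of grouping-by-similarity on which such systems rest - an illustration of the general technique, not the actual pipeline of Google News or the European Media Monitor:

    # Group similar news stories by their word content: represent each
    # story as a TF-IDF vector, then cluster the vectors with k-means.
    # Headlines are invented; this is not any real system's pipeline.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    stories = [
        "Food riots spread in Haiti as prices rise",
        "Haiti hit by new rioting over food prices",
        "Political unrest grows in Zimbabwe",
        "Zimbabwe opposition protests continue",
    ]
    vectors = TfidfVectorizer(stop_words="english").fit_transform(stories)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
    for label, story in sorted(zip(labels, stories)):
        print(label, story)

Stories assigned the same label form one ‘cluster’; tracking cluster sizes over time yields exactly the ‘burgeoning clusters’ signal described above.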

Whereas the enclaves of disciplinary societies – the family, school, factory and so on - are like different moulds or castings into which individuals are placed at different times and which shape and produce their subjectivity that way, the mechanisms of the societies of control are ‘a modulation, like a self-deforming cast that will continuously change from one moment to the other’. Instead of the prison or factory of disciplinary societies, what we have now is the corporation of the control societies which is likened to a spirit or gas:

The factory constituted individuals as a single body to the double advantage of the boss who surveyed each element within the mass and the unions who mobilized a mass resistance; but the corporation constantly presents the brashest rivalry as a healthy form of emulation, an excellent motivational force that opposes individuals against one another [when it comes to negotiating for a higher salary, for example, according to the  modulating principle of individual performance and merit] and runs through each, dividing each within. 

(Gilles Deleuze, ‘Postscript on Control Societies’, Negotiations: 1972-1990 (New York: Columbia University Press, 1995))

Interestingly, given some of the things I wrote earlier about knowledge and learning, this is also the case with the School. Here, too, perpetual training now reigns by means of the introduction of an audit culture, evaluation forms, league-tables, and other forms of monitoring and micro-management; with continuous control, including continuous assessment, training and staff development, replacing the examination.

What’s more, just as the School has been handed over to the corporation in Deleuze’s account, so now, I would maintain, has the University. The fundamental transformation in the way universities in England are viewed that has recently been proposed by the Labour government-commissioned Browne Report, and imposed by the Conservative/Liberal Democrat coalition, provides only the latest evidence of this. It is a shift from thinking of the university as a public good financed mainly from public funds, to treating it as a ‘lightly regulated market’ (Collini). A market, moreover, in which consumer demand, in the form of the choices of individual students over where and what to study, reigns supreme when it comes to deciding where the funding goes, and thus what is offered by competing ‘service providers (i.e. universities)’, which are now required to operate as businesses in order to ‘produce the most effective mix of skills to meet business needs’. Like the School, the University is thus ‘becoming less and less a closed site differentiated from the workspace as another closed site’ in a process of continuous control that is never-ending. For nothing is left alone for long in a control-based system. While ‘in the disciplinary societies one was always starting again’, as the individual moved from school, to university, to the factory, in societies of control one can never finish anything, ‘the corporation, the educational system, the armed services being metastable states coexisting in one and the same modulation, like a universal system of deformation’.

It follows that, despite what some of the banners and slogans of those protesting against the marketisation of the higher education system and the increase in tuition fees in England have claimed, the contemporary university is not best understood as a factory. Nor, to take another example, is Facebook, for all the latter’s harnessing of the free labour power generated by social cooperation (Scholz). Facebook’s fluid and relatively open environment, together with its own origins, like Google’s, in the contemporary university – Facebook was famously invented by a Harvard undergraduate, Mark Zuckerberg - means that it, too, is far closer to Deleuze’s account of the corporation that has replaced the factory in a control society. And, like the university, Facebook can be seen as part of the corresponding reconfiguration of the individual in terms of the dividual and of the mass in terms of coded data that is produced to be controlled:

The disciplinary societies have two poles: the signature that designates the individual, and the number or administrative numeration that indicates his or her position within a mass. This is because the disciplines never saw any incompatibility between these two, and because at the same time power individualizes and masses together, that is, constitutes those over whom it exercises power into a body and molds the individuality of each member of that body…. In the societies of control, on the other hand, what is important is no longer either a signature or a number, but a code: the code is a password… The numerical language of control is made of codes that mark access to information, or reject it. We no longer find ourselves dealing with the mass/individual pair. Individuals have become ‘dividuals’, and masses, samples, data, markets, or ‘banks’. 

(Gilles Deleuze, ‘Postscript on Control Societies’, Negotiations: 1972-1990 (New York: Columbia University Press, 1995))

 

(An earlier version of some of the material provided above appeared in 'Deleuze’s "Postscript on the Societies of Control"’ (with Clare Birchall and Pete Woodbridge), Culture Machine, 11, 2010.)

Wednesday
Dec 1, 2010

On the limits of openness III: open government

The global financial crisis that began in 2008 has only served to add further urgency to the belief of many in the UK that the government should relinquish its copyright on all local, regional and national data collected with taxpayers’ money - most vociferously that relating to Parliamentary expenses and the salaries and bonuses of the highest-paid employees in the City of London - and make it freely and openly available to the public by publishing it online, where it can be searched, mined, mapped, graphed, cross-tabulated, visualized, audited, interpreted, analysed and assessed using software tools. The Guardian newspaper in the UK has even gone so far as to establish a ‘Free Our Data’ campaign to this end.
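
To make ‘searched, mined, graphed and cross-tabulated’ a little more concrete, here is a minimal sketch in Python, assuming the pandas library and a hypothetical CSV file of expense claims with ‘mp’, ‘category’ and ‘amount’ columns - an illustration of the kind of analysis such freed data invites, not any campaign’s actual toolkit:

    # A minimal sketch of analysing open government data with pandas.
    # 'mps_expenses.csv' and its column names are hypothetical.
    import pandas as pd

    claims = pd.read_csv("mps_expenses.csv")

    # Cross-tabulate total claims by MP and expense category.
    table = pd.pivot_table(claims, values="amount", index="mp",
                           columns="category", aggfunc="sum", fill_value=0)

    # Rank MPs by their overall claims.
    print(table.sum(axis=1).sort_values(ascending=False).head(10))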

From a liberal democratic perspective, freeing publicly funded and acquired data like this, whether it is gathered directly in the process of census collection, or indirectly as part of other activities (crime, healthcare, transport, schools and accident statistics, for example), helps society to perform more efficiently. It does so not least by virtue of its ability to play a key role in increasing citizen participation and involvement in democracy, and indeed government, as access to information such as that needed to intervene in public policy is no longer restricted either to the state or to those corporations, institutions, organizations and individuals who have sufficient money and power to acquire it for themselves.

But neoliberals also support making the data freely and openly available to businesses and the public. They do so on the grounds that it provides a means of achieving the ‘best possible input/output equation’ (Lyotard). In this respect it is of a piece with the emphasis placed by neoliberalism’s audit culture on accountability, transparency, evaluation, measurement and centralised data management: for instance, in Higher Education regarding the impact of research on society and the economy, league tables, teaching standards, contact hours, as well as student drop-out rates, future employment destinations and earning prospects. From this perspective, such openness and communicative transparency is perceived as ensuring greater value for (taxpayers’) money, enabling costs to be distributed more effectively, and increasing choice, innovation, enterprise, creativity, competitiveness and accountability (over MPs’ expense payments for second homes, moat cleaning, duck islands, trouser presses and the like).

Some libertarians have even gone so far as to argue that there is no need at all to make difficult policy decisions about what data and information it is right to publish online and what to keep secret. (Since Prince Harry is funded from the public purse, do the public have the right to access data regarding his blood group and DNA, so it can be determined once and for all that his father is Prince Charles and not James Hewitt?) Instead, we should work toward the kind of situation the science-fiction writer Bruce Sterling proposes. In Shaping Things, his non-fiction book on the future of design, Sterling advocates retaining all data and information, ‘the known, the unknown known, and the unknown unknown’, in large archives and databases equipped with the necessary bandwidth, processing speed and storage capacity, and simply devising search tools and metadata that are accurate, fast and powerful enough to find and access it.

Yet to have participated in the shift away from questions of truth, justice and what, in The Inhuman, Lyotard places under the headings of ‘heterogeneity, dissensus, event… the unharmonizable’, and toward a concern with performativity, measurement and optimising the relation between input and output, one doesn’t need to be a practicing data journalist, or to have actively contributed to the movements for open access, open data or open government, at all. If you are one of the 1.3 million-plus people who have purchased a Kindle, and helped the sale of digital books outpace those of hardbacks on Amazon’s US website, then you have already signed a license agreement allowing the online book retailer - but not academic researchers or the public - to collect, store, mine, analyse and extract economic value from data concerning your personal reading habits for free. Similarly, if you are one of the 23 million people in the UK and 500 million worldwide who use the password-protected Facebook social network, then you are already voluntarily giving your time and labour for free, not only to help its owners, their investors, and other companies make a reputed $1 billion a year from demographically targeted advertising, but to supply law enforcement agencies with profile data relating to yourself, your family, friends, colleagues and peers they can use in investigations. Even if you have done neither, you will in all probability have provided the Google technology company with a host of network data and digital traces it can both monetize and give to the police as a result of having mapped your home, digitized your book, or supplied you with free music videos to enjoy via Google Street View, Google Maps, Google Earth, Google Book Search and YouTube, which Google also owns. And if this shift from open access to Google seems somewhat farfetched, it’s worth remembering that ‘Google has moved to establish, embellish, or replace many core university services such as library databases, search interfaces, and e-mail servers’; and that in fact universities gave birth to Google, Google’s PageRank algorithm being little more ‘than an expansion of what is known as citation analysis’.
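
The kinship between PageRank and citation analysis is easy to see in code. Here is a minimal sketch of the basic iterative idea, run on an invented three-page link graph rather than Google’s actual web-scale implementation:

    # Each page repeatedly passes a share of its importance to the pages
    # it links to - much as citations confer standing on the works cited.
    def pagerank(links, damping=0.85, iterations=50):
        # links: dict mapping each page to the list of pages it links to.
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outgoing in links.items():
                for target in outgoing:
                    new[target] += damping * rank[page] / len(outgoing)
            rank = new
        return rank

    graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}  # invented link graph
    print(pagerank(graph))

The production algorithm adds handling for dangling pages and runs over billions of nodes, but the recursive ‘importance flows along links’ idea is the same one citation analysis applies to scholarly literature.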

Obviously, no matter how exciting and enjoyable such activities may be, you don’t have to buy that e-book reader, join that social network or display your personal metrics online, from sexual activity to food consumption, in an attempt to identify patterns in your life – what is called life-tracking or self-tracking.  (Although, actually, a lot of people are quite happy to keep contributing to the networked communities reached by Facebook and YouTube, even though they realise they are being used as free labour and that, in the case of the former, much of what they do cannot be accessed by search engines and web browsers. They just see this as being part of the deal and a reasonable trade-off for the services and experiences that are provided by these companies.) Nevertheless, even if we want to, refusing to take part in this transformation of knowledge and learning into quantities of data, and shift away from questions of what is just and right toward a concern with optimizing the system’s performance, is just not an option for most of us.  It’s not something that can be opted out of by declining to take out a Tesco Club Card, refusing to look for research using Google Scholar, or committing social networking ‘suicide’ and reading print-on-paper books instead.

For one thing, the process of capturing data by means not just of the internet, but also a myriad of cameras, sensors and robotic devices, is now so ubiquitous and all-pervasive it is impossible to avoid being caught up in it, no matter how rich, knowledgeable and technologically proficient you are. It’s regularly said that there are approximately four million cameras in the UK – one for every 14 people, more than any other country (and that’s without even mentioning means of gathering data that are reputed to be more intrusive still, such as mobile phone GPS location and automatic vehicle number plate recognition). Yet no one really knows how many CCTV cameras are actually in operation in Britain today. (In fact the above statistic is reputed to have been based merely ‘on a dubious extrapolation from the number of cameras in London’s Putney High Street in 2002’.)

For another, and as the example of CCTV illustrates, it’s not necessarily a question of actively doing something in this respect. It’s not a matter of positively contributing free labour to the likes of Flickr and YouTube, for instance; or of refusing to do so. Nor is it a case of the separation between work and non-work being harder to maintain nowadays. (Is it work or leisure when you’re writing a status update on Facebook, posting a photograph, ‘friending’ someone, interacting, detailing your ‘likes’ and ‘dislikes’ regarding the places you eat, the films you watch, the books you read?) As Gilles Deleuze and Felix Guattari pointed out some time ago, ‘surplus labor no longer requires labor... one may furnish surplus-value without doing any work’, or anything that even remotely resembles work for that matter, at least as it is most commonly understood:

In these new conditions, it remains true that all labour involves surplus labor; but surplus labor no longer requires labor. Surplus labor, capitalist organization in its entirety, operates less and less by the striation of space-time corresponding to the physicosocial concept of work. Rather, it is as though human alienation through surplus labor were replaced by a generalized ‘machinic enslavement’, such that one may furnish surplus-value without doing any work (children, the retired, the unemployed, television viewers, etc.). Not only does the user as such tend to become an employee, but capitalism operates less on a quantity of labor than by a complex qualitative process bringing into play modes of transportation, urban models, the media, the entertainment industries, ways of perceiving and feeling – every semiotic system. It is as though, at the outcome of the striation that capitalism was able to carry to an unequalled point of perfection, circulating capital necessarily recreated, reconstituted, a sort of smooth space in which the destiny of human beings is recast. 

(Gilles Deleuze and Felix Guattari, A Thousand Plateaus: Capitalism and Schizophrenia (London: Athlone, 1988) p.492)

So as the above two examples show, this transformation of knowledge and information into quantities of data is not something that can actually be opted out of, since it’s not something that is necessarily opted into.

But there is a further and related reason why all this data capturing, storing and mining cannot be simply opted out of or resisted via facilities such as Google Dashboard, which allows people to see all the data Google has about them, or by reporting objectionable content, as it’s possible to do in the case of Google Street View, providing you’re knowledgeable enough. This is that too often such notions of refusal and active resistance (like their counterparts to do with ideas of privacy, civil rights and liberties) have their basis in a conception of the autonomous, fully-conscious, rational, self-identical and self-present individual humanist subject that these changes in media and technology may be in the process of helping to reconfigure. As a result, they risk overlooking the possibility that computers, databases, archives, servers, blogs, microblogs, RSS feeds, image and video-sharing, social networking and ‘the cloud’ are not just being used to change the status and nature of knowledge; they may be involved in the constitution of a very different form of human subject too.

Wednesday
Nov 24, 2010

On the limits of openness II: from open access to open data

In ‘On the limits of openness I’ (see below), I argued that in order to gain an appreciation of what the humanities can become in an era of digital media technology, we would be better advised to turn for assistance, not to computing science, but to the writers, poets, historians, literary critics, theorists and philosophers of the humanities. Let me explain what I mean.

Thirty years ago the philosopher Jean-François Lyotard was able to show how science, lacking the resources to legitimate itself as true, had, since its beginnings with Plato, relied for its legitimacy on precisely the kind of knowledge it did not even consider to be knowledge: non-scientific narrative knowledge. Specifically, science legitimated itself by producing a discourse called philosophy. It was philosophy’s role to generate a discourse of legitimation for science. Lyotard proceeded to define as modern any science that legitimated itself in this way by means of a metadiscourse which explicitly appealed to a grand narrative of some sort: the life of the spirit, the Enlightenment, progress, modernity, the emancipation of humanity, the realisation of the Idea, and so on.

What makes Lyotard’s analysis so significant with respect to the emergence of the digital humanities and the computational turn is that his intention was not to position philosophy as being able to tell us as much, if not more, about science than science itself. It was rather to emphasize that, in a process of transformation that had been taking place since at least the end of the 1950s, such long-standing metanarratives of legitimation had now themselves become obsolete.

So what happens to science when the philosophical metanarratives that legitimate it are no longer credible?   Lyotard’s answer, at least in part, was that science was increasing its connection to society, especially the instrumentality and functionality of society (as opposed to a notion of, say, ‘public service’). Science was doing so by helping to legitimate the power of States, companies and multinational corporations by optimizing the relationship ‘between input and output’, between what is put into the social system and what is got out of it, in order to get more from less. ‘Performativity’, in other words.

It is at this point that we return directly to the subject of computers and computing. For Lyotard, writing in 1979, technological transformations in research and the transmission of acquired learning in the most highly developed societies, including the widespread use of computers and databases and the ‘miniaturization and commercialization of machines’, were already in the process of exteriorizing knowledge in relation to the ‘knower’. Lyotard saw this general transformation and exteriorization as leading to a major alteration in the status and nature of knowledge: away from a concern with ‘the true, the just, or the beautiful, etc.’, with ideals, with knowledge as an end in itself, and precisely toward a concern with improving the social system’s performance, its efficiency.  So much so that, for Lyotard:

The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language. The ‘producers’ and users of knowledge must now, and will have to, possess the means of translating into these languages whatever they want to invent or learn. Research on translating machines is already well advanced. Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as ‘knowledge’ statements.

(Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge (Manchester: Manchester University Press, 1986) p.4)

Scroll forward 30 years and we do indeed find a lot of discourse in the sciences today taken up with exteriorizing knowledge and information in order to achieve ‘the best possible performance’ by eliminating delays and inefficiencies and solving technical problems. So we have John Houghton’s 2009 study showing that the open access academic publishing model championed most vociferously in the sciences, whereby peer-reviewed scholarly research and publications are made available for free online to all those who are able to access the Internet, without the need to pay subscriptions either to publish or to (pay per)view it, is actually the most cost-effective mechanism for scholarly publishing. Others have detailed at length the increases open access publishing and the related software make possible in the amount of research material that can be published, searched and stored, the number of people who can access it, the impact of that material, the range of its distribution, and the speed and ease of reporting and information retrieval, leading to what one of the leaders of the open access movement, Peter Suber, has described as ‘better metrics’.

One highly influential open access publisher, the Public Library of Science (PLoS), is, with their PLoS Currents: Influenza website, even experimenting with publishing scientific research online before it has undergone in-depth peer review. PLoS are justifying this experiment on the grounds that it enables ideas, results and data to be disseminated as rapidly as possible.  But they are far from alone in making such an argument. Along with full, finished, peer-reviewed texts, more and more researchers in the sciences are making the email, blog, website or paper in which an idea is first expressed openly available online, together with any drafts, working papers, beta, pre-print or grey literature that have been produced and circulated to garner comments from peers and interested parties.  Like PLoS, these scientists perceive doing so as a way of disseminating their research earlier and faster, and therefore increasing its visibility, use, impact, citation count and so on. They also regard it as a means of breaking down much of the culture of secrecy that surrounds scientific research, and as helping to build networks and communities around their work by in effect saying to others, both inside and outside the academy, ‘it’s not finished, come and help us with it!’ Such crowd-sourcing opportunities are in turn held as leading to further increases in their work’s visibility, use, impact, citation counts, prestige and so on, thus optimizing the ratio between minimal input and maximum output still further.

Nor is it just the research literature itself that is being rendered accessible by scientists in this way. Even the data that is created in the course of scientific research is being made freely and openly available for others to use, analyse and build upon. Known as Open Data, this initiative is motivated by more than an  awareness that data is the main research output in many fields.  In the words of another of the leading advocates for open access, Alma Swan, publishing data online on an open basis bestows it with a ‘vastly increased utility’: digital data sets are ‘easily passed around’; they are ‘more easily reused’; and they contain more ‘opportunities for educational and commercial exploitation’. 

Some academic publishers are viewing the linking of their journals to the underlying data as another of their ‘value-added’ services to set alongside automatic alerting and sophisticated citation, indexing, searching and linking facilities (and to help ward off the threat of disintermediation posed by the development of digital technology, which makes it possible for academics to take over the means of dissemination and publish their work for and by themselves cheaply and easily). In fact a 2009 JISC open science report identified ‘open-ness, predictive science based on massive data volumes and citizen involvement as [all] being important features of tomorrow’s research practice’.

In a further move in this direction, all seven PLoS journals are now providing a broad range of article-level metrics and indicators relating to usage data on an open basis. No longer withheld as ‘trade secrets’, these metrics measure which articles are attracting the most views, citations from the scholarly literature, social bookmarks, coverage in the media, comments, responses, notes, ‘Star’ ratings, blog coverage, etc.
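
Viewed simply as data, what is being opened up here is a per-article record of counts that anyone can total, rank and compare. A minimal sketch in Python, using invented DOIs and figures rather than real PLoS data:

    # Article-level metrics as open data: per-article counts that readers
    # can analyse for themselves. DOIs and numbers below are invented.
    articles = [
        {"doi": "10.1371/example.0001", "views": 5400, "citations": 12, "bookmarks": 31},
        {"doi": "10.1371/example.0002", "views": 880, "citations": 3, "bookmarks": 4},
        {"doi": "10.1371/example.0003", "views": 2100, "citations": 7, "bookmarks": 15},
    ]
    # Rank articles 'on their own merits' - here, simply by usage.
    for a in sorted(articles, key=lambda a: a["views"], reverse=True):
        print(a["doi"], a["views"], a["citations"], a["bookmarks"])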

PLoS has positioned this programme as enabling science scholars to assess ‘research articles on their own merits rather than on the basis of the journal (and its impact factor) where the work happens to be published’, and they encourage readers to carry out their own analyses of this open data. Yet it is difficult not to see article-level metrics as also being part of the wider process of transforming knowledge and learning into ‘quantities of information’, as Lyotard puts it; quantities, furthermore, that are produced more to be exchanged, marketed and sold – for example, by individual academics to their departments, institutions, funders and governments in the form of indicators of ‘quality’ and ‘impact’ - than for their ‘use-value’.

The requirement to have visibility, to show up in the metrics, to be measurable, nowadays encourages researchers to publish a lot and frequently. So much so that the peer-reviewed academic journal article has been positioned by some as having now assumed ‘a single central value, not that of bringing something new to the field but that of assessing the person’s research, with a view to hiring, promotion, funding, and, more and more, avoiding termination.’  In such circumstances, as Lyotard makes clear, ‘[i]t is not hard to visualize learning circulating along the same lines as money, instead of for its “educational” value or political (administrative, diplomatic, military) importance’. To the extent that it is even possible to say that, just as money has become a source of virtual value and speculation in the era of American-led neoliberal global finance capital, so too has education, research and publication. And we all know what happened when money became virtual.

Friday
Nov 19, 2010

On the limits of openness I: the digital humanities and the computational turn to data-driven scholarship

The digital humanities can be broadly understood as embracing all those scholarly activities in the humanities that involve writing about digital media and technology, and being engaged in processes of digital media production, practice and analysis. For example, developing new media theory, creating interactive electronic archives and literature, building online databases and wikis, producing virtual art galleries and museums, or exploring how various technologies reshape teaching and research.  Yet this field - or, better, constellation of fields - is neither unified nor self-identical. If anything, the digital humanities are comprised of a wide range of often conflicting attitudes, approaches and practices that are being negotiated and employed in a variety of different contexts.

In what follows my interest is not so much with the ongoing debate as to how precisely the digital humanities are to be defined and understood, but with an aspect of this emergent movement that appears to be becoming increasingly dominant. So much so that for some it is rapidly coming to stand in for, or be equated with, the digital humanities as a whole. This is the so-called ‘computational turn’ in the humanities.

The latter phrase has been adopted to refer to the process whereby techniques and methodologies drawn from computer science and related fields, including interactive information visualisation, statistical data analysis, science visualization, image processing, network analysis, and the management, manipulation and mining of data, are being increasingly used to produce new ways of approaching and understanding texts in the humanities.  Indeed, thanks to increases in computer processing power and its affordability over the last few years, together with the sheer amount of cultural material that is now available in digital form, number-crunching software is being applied to millions of humanities texts in this way.

Before going any further I want to make it clear that it is not my intention to equate this computational turn with the digital humanities per se. Even if the latter is sometimes known as Humanities Computing - or as a transition between the so-called ‘traditional humanities’ and Humanities Computing  - what is coming to be called the digital humanities and this computational turn in the humanities are not one and the same thing as far as I am concerned.

In fact, far from equating the digital humanities with the computational turn, I want to insist on the importance of maintaining a difference between them, certainly for any understanding of what the humanities can become in an era of digital media technology. For, to date (and I acknowledge it is still relatively early days), the traffic in this computational turn has been rather one-way. As the phrase suggests, it has primarily been about exploring what direct practical use computer science can be to the humanities in terms of performing computations on sets and flows of data that are often so large that, in the words of the Digging Into Data Challenge, ‘they can be processed only using computing resources and computational methods’. In the main the concern has been with either digitizing ‘born analog’ humanities texts and artifacts, or gathering together ‘born digital’ humanities texts and artifacts – videos, websites, games, photography, sound recordings, 3D data - and then taking complex and often extremely large-scale data analysis techniques from computing science and related fields and applying them to these humanities texts and artifacts. So we have the likes of Dan Cohen and Fred Gibbs’s text mining of ‘the 1,681,161 books that were published in English in the UK in the long nineteenth century’ (according to Google at least); Lev Manovich and the Software Studies Initiative’s use of ‘software to analyze and visualize... 4535 Time magazine covers... 1074790 manga pages, and 1100+ 20th century feature films’; or Stefanie Posavec’s Literary Organism, which visualizes the structure of Part One of On the Road as a tree.
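
For a schematic sense of what such text mining involves, here is a minimal sketch in Python in the spirit of - though vastly simpler than - Cohen and Gibbs’s work: counting how often a keyword appears in book titles, year by year, across a handful of invented records:

    # Mine a corpus of (year, title) records for a keyword and count
    # occurrences per year. The records below are invented for illustration.
    import collections

    records = [
        (1830, "The Science of Morals"),
        (1851, "Universal Progress and the Arts"),
        (1851, "A Treatise on the Steam Engine"),
        (1870, "The Progress of the Nation"),
    ]
    keyword = "progress"
    hits = collections.Counter(year for year, title in records
                               if keyword in title.lower())
    for year in sorted(hits):
        print(year, hits[year])

Scaled up to millions of digitized titles, counts like these are what allow claims about, say, the rise and fall of a concept across the long nineteenth century.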

Yet just as interesting as what computer science has to offer the humanities, I believe, is the question of what the humanities - in both their digital and ‘traditional’ guises (assuming they can be distinguished in this way, which is by no means certain) - have to offer computer science; and, beyond that, what the humanities themselves can bring to the understanding of computing and the shaping of the digital. Do the humanities really need to draw quite so heavily on computer science to develop a sense of what they can be in an era of digital media technology? Along with a computational turn in the humanities, might we not also benefit from a humanities turn in our understanding of the computational and the digital?

To be sure, one of the interesting things about computer science is that, as Mark Poster pointed out some time ago, it was the first case where ‘a scientific field was established that focuses on a machine’ rather than on an aspect of nature or culture. Yet more interesting still is the way Poster was able to demonstrate that the relation to this machine in computer science is actually one of misrecognition, with the computer occupying ‘the position of the imaginary’ and being ‘inscribed with transcendent status’. This misidentification on the part of computer science has significant implications for our response to the computational turn. It suggests computer science is not all that well equipped to understand itself and its own founding object, let alone help the humanities with their relation to computing and the digital.

In fact, counter-intuitive though it may seem, if what we are seeking is an appreciation of what the humanities can become in an era of digital media technology and data-driven scholarship, we would be better advised to look for assistance elsewhere than primarily to computing science and engineering, science and technology, or even science in general. I almost hesitate to say it in the present political climate - although it is important to do so for precisely this reason - but we would be better off turning to the writers, poets, historians, literary critics, theorists and philosophers of the humanities right from the start.