Thursday
Jan 27, 2011

On the limits of openness VI: has critical theory run out of time for data-driven scholarship?

Something that is particularly noticeable about many instances of this turn to data-driven scholarship - especially after decades in which the humanities have been heavily marked by a variety of critical theories: Marxism, feminism, psychoanalysis, structuralism, post-colonialism, post-Marxism - is just how difficult their exponents find it to understand computing and the digital as anything more than tools and techniques, and thus how naive and lacking in meaningful critique such work often is (Higgen). Of course, this (at times explicit) repudiation of criticality could be viewed as part of what makes certain aspects of the digital humanities so intriguing at the moment. Exponents of the computational turn are precisely not making what I have elsewhere characterised as the anti-political gesture of conforming to accepted (and frequently moralistic) conceptions of politics that have been decided in advance, including those which see politics only in terms of power, ideology, race, gender, class, sexuality, ecology, affect and so forth. They are responding to what is perceived as a fundamentally new cultural situation, and to the challenge it represents to our traditional methods of studying culture, by avoiding such conventional gestures and experimenting instead with the development of fresh methods and approaches for the humanities.

In a series of posts on his Found History blog, Tom Scheinfeldt, Managing Director at the Center for History and New Media at George Mason University, positions such scholarship very much in terms of a shift from a concern with theory and ideology to a concern with methodology:

I believe... we are entering a new phase of scholarship that will be dominated not by ideas, but once again by organizing activities, both in terms of organizing knowledge and organizing ourselves and our work... as a digital historian, I traffic much less in new theories than in new methods. The new technology of the Internet has shifted the work of a rapidly growing number of scholars away from thinking big thoughts to forging new tools, methods, materials, techniques, and modes of work which will enable us to harness the still unwieldy, but obviously game-changing, information technologies now sitting on our desktops and in our pockets.

In this respect there may well be a degree of ‘relief in having escaped the culture wars of the 1980s’ - for those in the US especially - as a result of this move ‘into the space of methodological work’ (Croxall) and what Scheinfeldt reportedly dubs ‘the post-theoretical age’. The problem is that, without such reflexive critical thinking and theories, many of those whose work forms part of this computational turn find it difficult to articulate exactly what the point of their work is, as Scheinfeldt readily acknowledges.

Witness one of the projects I mentioned earlier: the attempt by Dan Cohen and Fred Gibbs to text-mine all the books published in English in the Victorian age (or at least those digitized by Google). Among other things, this allows Cohen and Gibbs to show that use of the word ‘revolution’ in book titles of the period spiked around ‘the French Revolution and the revolutions of 1848’. But what argument are they trying to make with this? What is it we are able to learn as a result of this use of computational power on their part that we didn’t know already and couldn’t have discovered without it?
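(To see how modest the mechanics of such a finding can be, here is a minimal sketch of the kind of title-frequency counting such projects rely on. It assumes a hypothetical CSV of digitized titles with ‘year’ and ‘title’ columns; the file name and layout are illustrative, not Cohen and Gibbs’s actual data or code.)

import csv
from collections import Counter

def title_word_counts(path, word):
    """Count how many book titles per year contain the given word."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # expects 'year' and 'title' columns
            if word in row["title"].lower():
                counts[int(row["year"])] += 1
    return counts

# Hypothetical usage: a spike in the counts around years in which books on
# the French Revolution or the revolutions of 1848 appeared would be the
# whole empirical basis of the claim.
counts = title_word_counts("victorian_titles.csv", "revolution")
for year in sorted(counts):
    print(year, counts[year])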

Elsewhere, in a response to Cohen and Gibbs’s project, Scheinfeldt suggests that the problem of theory, or the lack of it, may actually be a matter of scale and timing:

It expects something of the scale of humanities scholarship which I’m not sure is true anymore: that a single scholar—nay, every scholar—working alone will, over the course of his or her lifetime ... make a fundamental theoretical advance to the field.

Increasingly, this expectation is something peculiar to the humanities. ...it required the work of a generation of mathematicians and observational astronomers, gainfully employed, to enable the eventual “discovery” of Neptune… Since the scientific revolution, most theoretical advances play out over generations, not single careers. We don’t expect all of our physics graduate students to make fundamental theoretical breakthroughs or claims about the nature of quantum mechanics, for example. There is just too much lab work to be done and data to be analyzed for each person to be pointed at the end point. That work is valued for the incremental contribution to the generational research agenda that it is.

Yet notice how theory is again being marginalized in favour of an emphasis on  STM subjects, and the adoption of expectations and approaches associated with mathematicians and astronomers in particular.

This is not to deny the importance of experimenting with the new kinds of knowledge, tools, methods, materials and modes of work and thinking digital media technologies create and make possible, in order to bring new forms of Foucauldian dispositifs, what Bernard Stiegler calls hypomnémata (i.e. mnemonics, what Plato referred to as pharmaka, both poisons and cures), or what I am trying to think here in terms of media gifts, into play. And I would potentially include in this process of experimentation techniques and methodologies drawn from computer science and other related fields such as information visualisation, data mining and so forth. Yes, of course, it is quite possible that in the future ‘people will use this data in ways we can’t even imagine yet’, both singularly and collaboratively (Stowell). Still, there is something intriguing about the way in which many defenders of the turn toward computational tools and methods in the humanities evoke a sense of time in relation to theory.

Take the argument that critical and self-reflexive theoretical questions about the use of digital tools and data-led methodologies should be deferred for the time being, lest they have the effect of strangling at birth what could turn out to be a very different form of humanities research before it has had a chance to properly develop and take shape. Viewed in isolation, it can be difficult, if not impossible, to decide whether this particular form of ‘limitless’ postponement is serving as an alibi for a naive and rather superficial form of scholarship; or whether it is indeed acting as a responsible, political or ethical opening to the (difference, heterogeneity and incalculability of the) future, including the future of the humanities. After all, the suggestion is that now is ‘not the right time’ to be making any such decision or judgement, since we cannot ‘yet’ know how humanists will ‘eventually’ come to use these tools and data, and thus what data-driven scholarship may or may not turn out to be capable of, critically, politically, theoretically.

This argument would be more convincing as a responsible, political or ethical call to leave the question of the use of digital tools and data-led methodologies in the humanities open if it were the only sense in which time was evoked in relation to theory in this context. Significantly, however, it is not. Advocates of the computational turn evoke time in a number of other, and often competing, senses too. These include:

a) that the time of theory is over, in the sense that a particular historical period or moment has now ended (e.g. that of the culture wars of the 1980s);

b) that the time for theory is over, in the sense that it is now the time for methodology;

c) that the time for theory to return, or to (re-)emerge in some new, unpredictable form representing a fundamental breakthrough or advance, has not arrived yet, and cannot necessarily be expected to do so for some time - although it may possibly be on its way - given that ‘most theoretical advances play out over generations, not single careers’.

All of which puts a very different inflection on the view of theoretical critique as being at best inappropriate, and at worst harmful, to data-driven scholarship. Even a brief glance at the history of theory’s reception in the English-speaking world reveals that those who announce that its time has not yet come, or is already over, that theory is in decline or even dead, and that we now live in a post-theoretical era, are merely endeavouring to keep it at a (temporal) distance. Rather than having to ask rigorous, critical and self-reflexive questions about their own practices and their justifications for them, those who position their work as being either pre- or post-theory are almost invariably doing so because it allows them to continue with their own preferred techniques and methodologies for studying culture relatively uncontested. Placed in this wider context, far from helping to keep the question concerning the use of digital tools and data-led methodologies in the humanities open (or having anything particularly interesting to say about theory), the rejection of critical-intellectual ideas as untimely can be seen as moralizing and conservative.

In saying this I am reiterating an argument made by Wendy Brown in the sphere of political theory. Yet can a similar case not be made with regard to the computational turn in the humanities, to the effect that the ‘rebuff of critical theory as untimely provides the core matter for the affirmative case for it’? Theory is vital from this point of view, not for conforming to accepted conceptions of political critique which see it primarily in terms of power, ideology, race, gender, class, sexuality, ecology, affect and so forth, or for sustaining conventional methods of studying culture that may no longer be appropriate to the networked nature of 21st century post-industrial society. Theory is vital ‘to contest the very sense of time invoked to declare critique untimely’:


If the charge of untimeliness inevitably also fixes time, then disrupting this fixity is crucial to keeping the times from closing in on us. It is a way of reclaiming the present from the conservative hold on it that is borne by the charge of untimeliness.

To insist on the value of untimely political critique is not, then, to refuse the problem of time and timing in politics but rather to contest settled accounts of what time is, what the times are, and what political tempo and temporality we should hew to in political life. 

(Wendy Brown, Edgework: Critical Essays on Knowledge and Politics (Princeton and Oxford: Princeton University Press, 2005) p.4)

Wednesday
Jan 12, 2011

On the limits of openness V: there are no digital humanities

Let’s bracket the many questions that can be raised about Deleuze’s thesis on the societies of control (some of which can also be raised about Lyotard’s account of the postmodern condition), and the reasons it has been taken up and used so readily within the contemporary social sciences, and social theory especially. For the time being, let us pursue a little further the hypothesis that the externalization of knowledge onto computers, databases, servers and the cloud is involved in the constitution of a different form of both society and human subject.

To what extent do such developments cast the so-called computational turn in the humanities in a rather different light to the celebratory data-fetishism that has come to dominate this rapidly emerging field of late? Is the direct, practical use of techniques and methodologies drawn from computer science and fields related to it here too helping to produce a major alteration in the status and nature of knowledge, and indeed the human subject? I’m thinking not just of the use of tools such as Anthologize, Delicious, Juxta, Mendeley, Pliny, Prezi and Zotero to structure and disseminate scholarship and learning in the humanities, but also of the generation of dynamic maps of large humanities data sets, and the employment of algorithmic techniques to search for and identify patterns in literary, cultural and filmic texts, as well as the way in which the interactive nature of much digital technology is enabling user data regarding people’s creative activities with these media to be captured, mined and analyzed by humanities scholars.

To be sure, in what seems to be almost the reverse of the situation we saw Lyotard describe, many of those in the humanities - including some of the field’s most radical thinkers - do now appear to be looking increasingly to science (and technology and mathematics) to provide their research with a degree of legitimacy. Witness Franco ‘Bifo’ Berardi’s appeal to ‘the history of modern chemistry on the one hand, and the most recent cognitive theories on the other’, for confirmation of the Compositionist philosophical hypothesis in his 2009 book, The Soul at Work: ‘There is no object, no existent, and no person: only aggregates, temporary atomic compositions, figures that the human eye perceives as stable but that are indeed mutational, transient, frayed and indefinable’. It is this hypothesis, derived from Democritus, that Bifo sees as underpinning the methods of both the Schizoanalysis of Deleuze and Guattari and Italian Autonomist theory, on which his own Compositionist philosophy is based. It is interesting, however, that Bifo should now feel the need to turn, albeit briefly and almost in passing, to science to underpin and confirm it.

Can this turn toward the sciences (if there has indeed been such a turn, which is by no means certain) be regarded as a response on the part of the humanities to the perceived lack of credibility, if not obsolescence, of their metanarratives of legitimation: the life of the spirit and the Enlightenment, but also Marxism, psychoanalysis and so forth? Indeed, are the sciences today to be regarded as answering many humanities questions more convincingly than the humanities themselves?

While ideas of this kind appear just that little bit too neat and symmetrical to be entirely convincing, this so-called ‘scientific turn’ in the humanities has been attributed by some to a crisis of confidence. It is a crisis regarded as having been brought about, if not by the lack of credibility of the humanities’ metanarratives of legitimation exactly, then at least in part by the ‘imperious attitude’ of the sciences. This attitude has led the latter to colonize the humanists’ space in the form of biomedicine, neuroscience, theories of cognition and so on.  Is the turn toward computing just the latest manifestation of, and response to, this crisis of confidence in the humanities?

Can we go even further and ask: is it evidence that certain parts of the humanities are attempting to increase their connection to society, and to the instrumentality and functionality of society especially? Can it merely be a coincidence that such a turn toward computing is gaining momentum at a time when the likes of the UK government are emphasizing the importance of the STM subjects and withdrawing support and funding for the humanities? Or is one of the reasons all this is happening now because the humanities, like the sciences themselves, are under pressure from government, business, management, industry and increasingly the media to prove they provide value for money in instrumental, functional, performative terms? (Is the interest in computing a strategic decision on the part of some of those in the humanities? As the project of Cohen and Gibbs shows, one can get funding from the likes of Google. In fact, ‘last summer Google awarded $1 million to professors doing digital humanities research’.)

To what extent, then, is the take-up of practical techniques and approaches from computing science providing some areas of the humanities with a means of defending themselves in an era of global economic crisis and severe cuts to higher education, through the transformation of their knowledge and learning into quantities of information - deliverables? Following Federica Frabetti, can we even position the computational turn as an event created precisely to justify such a move on the part of certain elements within the humanities? And does this mean that, if we don’t simply want to go along with the current movement away from what remains resistant to a general culture of measurement and calculation, and toward a concern to legitimate power and control by optimizing the system’s efficiency, we would be better off using a term other than ‘digital humanities’? After all, as Frabetti points out, the idea of a computational turn implies that the humanities, thanks to the development of a new generation of powerful computers and digital tools, have somehow become digital, or are in the process of becoming digital, or are at least coming to terms with the digital and computing. Yet what I am attempting to show here, by drawing on the philosophy of Lyotard and others, is that the digital is not something that can now be added to the humanities - for the simple reason that the (supposedly pre-digital) humanities can be seen to have had an understanding of, and engagement with, computing and the digital for some time now.


Friday
Dec 17, 2010

On the limits of openness IV: why Facebook is not a factory (even though it profits from the exploitation of labour)

Could the move toward supplying ever more research, information and data online for free on an open basis be part of the development of what Gilles Deleuze called a control society?  Here we are no longer subject primarily to those closed, disciplinary modes of power Michel Foucault traced historically in Discipline and Punish, and which govern by means of a dispersed and decentralized ensemble of institutions, instruments, techniques and procedures that operate to produce and regulate subjectivity via the interiorization of the law.  Such disciplinary societies are characterized by vast closed environments - the family, school, barracks, factory and, depending on circumstances, the hospital - each with their own laws, through which the individual ceaselessly passes, one to the other. As Deleuze makes clear in his 'Postscript on the Societies of Control', these disciplinary environments or enclaves are about enclosure, confinement, surveillance: their project is to ‘concentrate’, ‘distribute in space’, ‘order in time’, ‘organise production’, ‘administer life’, ‘compose a productive force within the dimension of space-time whose effect will be greater than the sum of its component forces’ . Above all, it is the prison which serves as the ‘analogical model’ for the closed system of disciplinary societies and the manner in which it produces and organizes subjectivity. Hence Foucault’s question in Discipline and Punish: ‘Is it surprising that prisons resemble factories, schools, barracks, hospitals, which all resemble prisons?’ 

For Deleuze, disciplinary societies reached their peak at the beginning of the 20th century. His contention is that, just as Foucault saw disciplinary societies as having superseded ‘societies of sovereignty’ from the late eighteenth century onwards, so, in a process that has accelerated since WWII, social organisation is in turn ceasing to be disciplinary, if it has not ceased to be so already. So much so that all the closed spaces associated with disciplinary societies are now in crisis: the family is in crisis, the health service is in crisis, the factory system is in crisis.

These disciplinary societies are in the process of being replaced by societies of control. The latter are our ‘immediate future’, Deleuze writes, and contain extremely rapid, free-floating forms of ‘continuous control and instant communication’, as he puts it in 'Control and Becoming', that operate in environments and spaces that are much more fluid and open.  Witness, to provide some 21st century examples, the way in which increases in computer processing capacity and the associated availability of large, complex data sets have enabled a degree of data mining and pattern recognition to be achieved that makes it possible to automatically anticipate and predict – and thus control, albeit in a comparatively open way – actions on the part of the subject, even before they actually take place. Think of Google News aggregating ‘headlines from news sources worldwide’, grouping  similar stories together and displaying them ‘according to each reader's personalized interests’; Last.fm employing scrobbling software to detail the listening habits of users and provide them with personalized selections of music based on their previous listening history;  or the European Media Monitor system of the European Commission’s Joint Research Center which ‘counts the number of stories on a given topic and looks for the names of people and places to create geotagged "clusters" for given events, like food riots in Haiti or political unrest in Zimbabwe. Burgeoning clusters and increasing numbers of stories indicate a topic of growing importance or severity.’
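(The logic of this last kind of monitoring is simple enough to sketch. What follows is a toy illustration, not the European Media Monitor’s actual system: it counts stories per topic in successive time windows and flags topics whose counts are growing. All names, data and thresholds here are hypothetical.)

from collections import Counter

def flag_burgeoning(stories, current_day, growth_factor=2.0):
    """stories: list of (day, topic) records. Flag topics whose story
    count today is at least growth_factor times yesterday's count."""
    today = Counter(topic for day, topic in stories if day == current_day)
    yesterday = Counter(topic for day, topic in stories if day == current_day - 1)
    return [topic for topic, n in today.items()
            if n >= growth_factor * max(yesterday[topic], 1)]

# Invented feed: 'unrest' jumps from 2 stories to 5, so it is flagged
# as a topic of growing importance or severity.
feed = [(1, "unrest"), (1, "unrest"), (1, "floods"),
        (2, "unrest"), (2, "unrest"), (2, "unrest"),
        (2, "unrest"), (2, "unrest"), (2, "floods")]
print(flag_burgeoning(feed, current_day=2))  # ['unrest']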

Whereas the enclaves of disciplinary societies - the family, school, factory and so on - are like different moulds or castings into which individuals are placed at different times and which shape and produce their subjectivity that way, the mechanisms of the societies of control are ‘a modulation, like a self-deforming cast that will continuously change from one moment to the other’. Instead of the prison or factory of disciplinary societies, what we have now is the corporation of the control societies, which Deleuze likens to a spirit or gas:

The factory constituted individuals as a single body to the double advantage of the boss who surveyed each element within the mass and the unions who mobilized a mass resistance; but the corporation constantly presents the brashest rivalry as a healthy form of emulation, an excellent motivational force that opposes individuals against one another [when it comes to negotiating for a higher salary, for example, according to the  modulating principle of individual performance and merit] and runs through each, dividing each within. 

(Gilles Deleuze, ‘Postscript on Control Societies’, Negotiations: 1972-1990 (New York: Columbia University Press, 1995))

Interestingly, given some of the things I wrote earlier about knowledge and learning, this is also the case with the School. Here, too, perpetual training now reigns by means of the introduction of an audit culture, evaluation forms, league-tables, and other forms of monitoring and micro-management; with continuous control, including continuous assessment, training and staff development, replacing the examination.

What’s more, just as the School has been handed over to the corporation in Deleuze’s account, so now, I would maintain, has the University. The fundamental transformation in the way universities in England are viewed that has recently been proposed by the Labour government-commissioned Browne Report, and imposed by the Conservative/Liberal Democrat coalition, provides only the latest evidence of this. It is a shift from thinking of the university as a public good financed mainly from public funds, to treating it as a ‘lightly regulated market’ (Collini). A market, moreover, in which consumer demand, in the form of the choices of individual students over where and what to study, reigns supreme when it comes to deciding where the funding goes, and thus what is offered by competing ‘service providers (i.e. universities)’, which are now required to operate as businesses in order to ‘produce the most effective mix of skills to meet business needs’. Like the School, the University is thus ‘becoming less and less a closed site differentiated from the workspace as another closed site’ in a process of continuous control that is never-ending. For nothing is left alone for long in a control-based system. While ‘in the disciplinary societies one was always starting again’, as the individual moved from school, to university, to the factory, in societies of control one can never finish anything, ‘the corporation, the educational system, the armed services being metastable states coexisting in one and the same modulation, like a universal system of deformation’.

It follows that, despite what some of the banners and slogans of those protesting against the marketisation of the higher education system and the increase in tuition fees in England have claimed, the contemporary university is not best understood as a factory. Nor, to take another example, is Facebook, for all its harnessing of the free labour power generated by social cooperation (Scholz). Facebook’s fluid and relatively open environment, together with its origins, like Google’s, in the contemporary university (Facebook was famously invented by a Harvard undergraduate, Mark Zuckerberg), means that it, too, is far closer to Deleuze’s account of the corporation that has replaced the factory in a control society. And, like the university, Facebook can be seen as part of the corresponding reconfiguration of the individual in terms of the dividual, and of the mass in terms of coded data that is produced to be controlled:

The disciplinary societies have two poles: the signature that designates the individual, and the number or administrative numeration that indicates his or her position within a mass. This is because the disciplines never saw any incompatibility between these two, and because at the same time power individualizes and masses together, that is, constitutes those over whom it exercises power into a body and molds the individuality of each member of that body…. In the societies of control, on the other hand, what is important is no longer either a signature or a number, but a code: the code is a password… The numerical language of control is made of codes that mark access to information, or reject it. We no longer find ourselves dealing with the mass/individual pair. Individuals have become ‘dividuals’, and masses, samples, data, markets, or ‘banks’. 

(Gilles Deleuze, ‘Postscript on Control Societies’, Negotiations: 1972-1990 (New York: Columbia University Press, 1995))

 

(An earlier version of some of the material provided above appeared in 'Deleuze’s "Postscript on the Societies of Control"’ (with Clare Birchall and Pete Woodbridge), Culture Machine, 11, 2010.)

Wednesday
Dec 1, 2010

On the limits of openness III: open government

The global financial crisis that began in 2008 has only served to add further urgency to the belief of many in the UK that the government should relinquish its copyright on all local, regional and national data collected with taxpayers’ money - most vociferously that relating to Parliamentary expenses and the salaries and bonuses of the highest paid employees in the City of London - and make it freely and openly available to the public by publishing it online, where it can be searched, mined, mapped, graphed, cross-tabulated, visualized, audited, interpreted, analysed and assessed using software tools. The Guardian newspaper in the UK has even gone so far as to establish a ‘Free Our Data’ campaign to this end.

From a liberal democratic perspective, freeing publicly funded and acquired data like this, whether it is gathered directly in the process of census collection, or indirectly as part of other activities (crime, healthcare, transport, schools and accident statistics, for example), helps society to perform more efficiently. It does so not least by virtue of its ability to play a key role in increasing citizen participation and involvement in democracy, and indeed government, as access to information such as that needed to intervene in public policy is no longer restricted either to the state or to those corporations, institutions, organizations and individuals who have sufficient money and power to acquire it for themselves.

But neoliberals also support making the data freely and openly available to businesses and the public. They do so on the grounds that it provides a means of achieving the ‘best possible input/output equation’ (Lyotard). In this respect it is of a piece with the emphasis placed by neoliberalism’s audit culture on accountability, transparency, evaluation, measurement and centralised data management: for instance, in Higher Education regarding the impact of research on society and the economy, league tables, teaching standards, contact hours, as well as student drop-out rates, future employment destinations and earning prospects. From this perspective, such openness and communicative transparency is perceived as ensuring greater value for (taxpayers’) money, enabling costs to be distributed more effectively, and increasing choice, innovation, enterprise, creativity, competitiveness and accountability (over MPs’ expenses payments for second homes, moat cleaning, duck islands, trouser presses and the like).

Some libertarians have even gone so far as to argue that there is no need at all to make difficult policy decisions about what data and information it is right to publish online and what to keep secret. (Since Prince Harry is funded from the public purse, do the public have the right to access data regarding his blood group and DNA, so it can be determined once and for all that his father is Prince Charles and not James Hewitt?) Instead, we should work toward the kind of situation the science-fiction writer Bruce Sterling proposes. In Shaping Things, his non-fiction book on the future of design, Sterling advocates retaining all data and information, ‘the known, the unknown known, and the unknown unknown’, in large archives and databases equipped with the necessary bandwidth, processing speed and storage capacity, and simply devising search tools and metadata that are accurate, fast and powerful enough to find and access it.
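(The data structure at the heart of Sterling’s ‘keep everything and search it’ scenario is easy enough to sketch. Here is a minimal inverted index - a toy, obviously, standing in for the rather more serious engineering his archives would actually demand; the sample records are invented.)

from collections import defaultdict

def build_index(documents):
    """Map each word to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, *words):
    """Return ids of documents containing all of the given words."""
    sets = [index.get(w.lower(), set()) for w in words]
    return set.intersection(*sets) if sets else set()

# Invented archive of three records.
docs = {1: "expenses claimed for moat cleaning",
        2: "city bonuses disclosed",
        3: "moat repairs and duck islands"}
idx = build_index(docs)
print(search(idx, "moat"))              # {1, 3}
print(search(idx, "moat", "cleaning"))  # {1}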

Yet to have participated in the shift away from questions of truth, justice and what, in The Inhuman, Lyotard places under the headings of ‘heterogeneity, dissensus, event… the unharmonizable’, and toward a concern with performativity, measurement and optimising the relation between input and output, one doesn’t need to be a practicing data journalist, or to have actively contributed to the movements for open access, open data or open government, at all. If you are one of the 1.3 million plus people who have purchased a Kindle, and helped the sales of digital books outpace those of hardbacks on Amazon’s US website, then you have already signed a license agreement allowing the online book retailer - but not academic researchers or the public - to collect, store, mine, analyse and extract economic value from data concerning your personal reading habits for free. Similarly, if you are one of the 23 million in the UK and 500 million worldwide who use the password-protected Facebook social network, then you are already voluntarily giving your time and labour for free, not only to help its owners, their investors, and other companies make a reputed $1 billion a year from demographically targeted advertising, but to supply law enforcement agencies with profile data relating to yourself, your family, friends, colleagues and peers that they can use in investigations. Even if you have done neither, you will in all probability have provided the Google technology company with a host of network data and digital traces it can both monetize and give to the police as a result of having mapped your home, digitized your book, or supplied you with free music videos to enjoy via Google Street View, Google Maps, Google Earth, Google Book Search and YouTube, which Google also owns. And if this shift from open access to Google seems somewhat farfetched, it’s worth remembering that ‘Google has moved to establish, embellish, or replace many core university services such as library databases, search interfaces, and e-mail servers’; and that in fact universities gave birth to Google, Google’s PageRank algorithm being little more ‘than an expansion of what is known as citation analysis’.
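(The citation-analysis point can be made concrete. PageRank treats a hyperlink as a citation and lets importance flow along those citations iteratively: a page matters if it is ‘cited’ by pages that themselves matter. The following is a toy sketch of that intuition, not Google’s actual implementation; the mini-web is invented.)

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: share its rank with everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:  # pass rank along each outgoing 'citation'
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Invented mini-web: C is 'cited' by both A and B, so it ranks highest.
web = {"A": ["C"], "B": ["A", "C"], "C": ["A"]}
print(pagerank(web))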

Obviously, no matter how exciting and enjoyable such activities may be, you don’t have to buy that e-book reader, join that social network or display your personal metrics online, from sexual activity to food consumption, in an attempt to identify patterns in your life – what is called life-tracking or self-tracking.  (Although, actually, a lot of people are quite happy to keep contributing to the networked communities reached by Facebook and YouTube, even though they realise they are being used as free labour and that, in the case of the former, much of what they do cannot be accessed by search engines and web browsers. They just see this as being part of the deal and a reasonable trade-off for the services and experiences that are provided by these companies.) Nevertheless, even if we want to, refusing to take part in this transformation of knowledge and learning into quantities of data, and shift away from questions of what is just and right toward a concern with optimizing the system’s performance, is just not an option for most of us.  It’s not something that can be opted out of by declining to take out a Tesco Club Card, refusing to look for research using Google Scholar, or committing social networking ‘suicide’ and reading print-on-paper books instead.

For one thing, the process of capturing data by means not just of the internet, but of a myriad of cameras, sensors and robotic devices, is now so ubiquitous and all-pervasive that it is impossible to avoid being caught up in it, no matter how rich, knowledgeable and technologically proficient you are. It’s regularly said that there are approximately four million cameras in the UK - one for every 14 people, more than any other country (and that’s without even mentioning means of gathering data that are reputed to be more intrusive still, such as mobile phone GPS location and automatic vehicle number plate recognition). Yet no one really knows how many CCTV cameras are actually in operation in Britain today. (In fact the above statistic is reputed to have been based merely ‘on a dubious extrapolation from the number of cameras in London’s Putney High Street in 2002’.)

For another, and as the example of CCTV illustrates, it’s not necessarily a question of actively doing something in this respect. It’s not a matter of positively contributing free labour to the likes of Flickr and YouTube, for instance; or of refusing to do so. Nor is it a case of the separation between work and non-work being harder to maintain nowadays. (Is it work or leisure when you’re writing a status update on Facebook, posting a photograph, ‘friending’ someone, interacting, detailing your ‘likes’ and ‘dislikes’ regarding the places you eat, the films you watch, the books you read?) As Gilles Deleuze and Felix Guattari pointed out some time ago, ‘surplus labor no longer requires labor... one may furnish surplus-value without doing any work’, or anything that even remotely resembles work for that matter, at least as it is most commonly understood:

In these new conditions, it remains true that all labor involves surplus labor; but surplus labor no longer requires labor. Surplus labor, capitalist organization in its entirety, operates less and less by the striation of space-time corresponding to the physicosocial concept of work. Rather, it is as though human alienation through surplus labor were replaced by a generalized ‘machinic enslavement’, such that one may furnish surplus-value without doing any work (children, the retired, the unemployed, television viewers, etc.). Not only does the user as such tend to become an employee, but capitalism operates less on a quantity of labor than by a complex qualitative process bringing into play modes of transportation, urban models, the media, the entertainment industries, ways of perceiving and feeling – every semiotic system. It is as though, at the outcome of the striation that capitalism was able to carry to an unequalled point of perfection, circulating capital necessarily recreated, reconstituted, a sort of smooth space in which the destiny of human beings is recast.

(Gilles Deleuze and Felix Guattari, A Thousand Plateaus: Capitalism and Schizophrenia (London: Athlone, 1988) p.492)

So as the above two examples show, this transformation of knowledge and information into quantities of data is not something that can actually be opted out of, since it’s not something that is necessarily opted into.

But there is a further and related reason all this data capturing, storing and mining cannot be simply opted out of or resisted via facilities such as Google Dashboard,  which allows people to see all the data Google has about them, or by reporting objectionable content,  as it’s possible to do in the case of Google Street View providing you’re knowledgeable enough. This is that too often such notions of refusal and active resistance (like their counterparts to do with ideas of privacy, civil rights and liberties)  have their basis in a conception of the autonomous, fully-conscious, rational, self-identical and self-present individual humanist subject that these changes in media and technology may be in the process of helping to reconfigure. As a result, they risk overlooking the possibility that computers, databases, archives,  servers, blogs, microblogs, RSS feeds, image and video-sharing, social networking and ‘the cloud’ are not just being used to change the status and nature of knowledge; they may be involved in the constitution of a very different form of human subject too. 

Wednesday
Nov 24, 2010

On the limits of openness II: from open access to open data

In ‘On the limits of openness I’ (see below), I argued that in order to gain an appreciation of what the humanities can become in an era of digital media technology, we would be better advised to turn for assistance not to computing science, but to the writers, poets, historians, literary critics, theorists and philosophers of the humanities. Let me explain what I mean.

Thirty years ago the philosopher Jean-François Lyotard was able to show how science, lacking the resources to legitimate itself as true, had, since its beginnings with Plato, relied for its legitimacy on precisely the kind of knowledge it did not even consider to be knowledge: non-scientific narrative knowledge. Specifically, science legitimated itself by producing a discourse called philosophy. It was philosophy’s role to generate a discourse of legitimation for science. Lyotard proceeded to define as modern any science that legitimated itself in this way by means of a metadiscourse which explicitly appealed to a grand narrative of some sort: the life of the spirit, the Enlightenment, progress, modernity, the emancipation of humanity, the realisation of the Idea, and so on.

What makes Lyotard’s analysis so significant with respect to the emergence of the digital humanities and the computational turn is that his intention was not to position philosophy as being able to tell us as much, if not more, about science than science itself. It was rather to emphasize that, in a process of transformation that had been taking place since at least the end of the 1950s, such long-standing metanarratives of legitimation had now themselves become obsolete.

So what happens to science when the philosophical metanarratives that legitimate it are no longer credible?   Lyotard’s answer, at least in part, was that science was increasing its connection to society, especially the instrumentality and functionality of society (as opposed to a notion of, say, ‘public service’). Science was doing so by helping to legitimate the power of States, companies and multinational corporations by optimizing the relationship ‘between input and output’, between what is put into the social system and what is got out of it, in order to get more from less. ‘Performativity’, in other words.

It is at this point that we return directly to the subject of computers and computing. For Lyotard, writing in 1979, technological transformations in research and the transmission of acquired learning in the most highly developed societies, including the widespread use of computers and databases and the ‘miniaturization and commercialization of machines’, were already in the process of exteriorizing knowledge in relation to the ‘knower’. Lyotard saw this general transformation and exteriorization as leading to a major alteration in the status and nature of knowledge: away from a concern with ‘the true, the just, or the beautiful, etc.’, with ideals, with knowledge as an end in itself, and precisely toward a concern with improving the social system’s performance, its efficiency.  So much so that, for Lyotard:

The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language. The ‘producers’ and users of knowledge must now, and will have to, possess the means of translating into these languages whatever they want to invent or learn. Research on translating machines is already well advanced. Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as ‘knowledge’ statements.

(Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge (Manchester: Manchester University Press, 1986) p.4)

Scroll forward 30 years and we do indeed find a lot of discourses in the sciences today taken up with exteriorizing knowledge and information in order to achieve ‘the best possible performance’ by eliminating delays and inefficiencies and solving technical problems. So we have John Houghton’s 2009 study showing that the open access academic publishing model championed most vociferously in the sciences, whereby peer-reviewed scholarly research and publications are made available for free online to all those who are able to access the Internet, without the need to pay subscriptions either to publish or to (pay per) view it, is actually the most cost-effective mechanism for scholarly publishing. Others have detailed at length the increases open access publishing and the related software make possible in the amount of research material that can be published, searched and stored, the number of people who can access it, the impact of that material, the range of its distribution, and the speed and ease of reporting and information retrieval, leading to what one of the leaders of the open access movement, Peter Suber, has described as ‘better metrics’.

One highly influential open access publisher, the Public Library of Science (PLoS), is, with their PLoS Currents: Influenza website, even experimenting with publishing scientific research online before it has undergone in-depth peer review. PLoS are justifying this experiment on the grounds that it enables ideas, results and data to be disseminated as rapidly as possible. But they are far from alone in making such an argument. Along with full, finished, peer-reviewed texts, more and more researchers in the sciences are making the email, blog, website or paper in which an idea is first expressed openly available online, together with any drafts, working papers, beta, pre-print or grey literature that have been produced and circulated to garner comments from peers and interested parties. Like PLoS, these scientists perceive doing so as a way of disseminating their research earlier and faster, and therefore increasing its visibility, use, impact, citation count and so on. They also regard it as a means of breaking down much of the culture of secrecy that surrounds scientific research, and as helping to build networks and communities around their work by in effect saying to others, both inside and outside the academy, ‘it’s not finished, come and help us with it!’ Such crowd-sourcing opportunities are in turn held to lead to further increases in their work’s visibility, use, impact, citation counts, prestige and so on, thus optimizing the ratio between minimal input and maximum output still further.

Nor is it just the research literature itself that is being rendered accessible by scientists in this way. Even the data that is created in the course of scientific research is being made freely and openly available for others to use, analyse and build upon. Known as Open Data, this initiative is motivated by more than an awareness that data is the main research output in many fields. In the words of another of the leading advocates for open access, Alma Swan, publishing data online on an open basis gives it a ‘vastly increased utility’: digital data sets are ‘easily passed around’; they are ‘more easily reused’; and they contain more ‘opportunities for educational and commercial exploitation’.

Some academic publishers are viewing the linking of their journals to the underlying data as another of their ‘value-added’ services to set alongside automatic alerting and sophisticated citation, indexing, searching and linking facilities (and to help ward off the threat of disintermediation posed by the development of digital technology, which makes it possible for academics to take over the means of dissemination and publish their work for and by themselves cheaply and easily). In fact a 2009 JISC open science report identified ‘open-ness, predictive science based on massive data volumes and citizen involvement as [all] being important features of tomorrow’s research practice’.

In a further move in this direction, all seven PLoS journals are now providing a broad range of article-level metrics and indicators relating to usage data on an open basis. No longer withheld as ‘trade secrets’, these metrics measure which articles are attracting the most views, citations from the scholarly literature, social bookmarks, coverage in the media, comments, responses, notes, ‘Star’ ratings, blog coverage, and so on.
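(As a rough illustration of what such article-level metrics amount to in practice, here is a sketch that records a few per-article counts and orders articles by different signals rather than by a single journal-level number. The fields and figures are invented for the purpose; this is not PLoS’s actual schema or API.)

from dataclasses import dataclass

@dataclass
class ArticleMetrics:
    doi: str
    views: int
    citations: int
    bookmarks: int
    blog_mentions: int

# Invented figures for three articles, for illustration only.
articles = [
    ArticleMetrics("10.1371/example.0001", 5400, 12, 40, 3),
    ArticleMetrics("10.1371/example.0002", 900, 30, 5, 0),
    ArticleMetrics("10.1371/example.0003", 12000, 2, 90, 7),
]

# Judging articles 'on their own merits' yields different orderings
# depending on which signal is consulted, rather than one ranking
# inherited from the journal's impact factor.
by_views = sorted(articles, key=lambda a: a.views, reverse=True)
by_citations = sorted(articles, key=lambda a: a.citations, reverse=True)
print([a.doi for a in by_views])      # most viewed first
print([a.doi for a in by_citations])  # most cited first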

PLoS has positioned this programme as enabling science scholars to assess ‘research articles on their own merits rather than on the basis of the journal (and its impact factor) where the work happens to be published’, and they encourage readers to carry out their own analyses of this open data. Yet it is difficult not to see article-level metrics as also being part of the wider process of transforming knowledge and learning into ‘quantities of information’, as Lyotard puts it; quantities, furthermore, that are produced more to be exchanged, marketed and sold - for example, by individual academics to their departments, institutions, funders and governments in the form of indicators of ‘quality’ and ‘impact’ - than for their ‘use-value’.

The requirement to have visibility, to show up in the metrics, to be measurable, nowadays encourages researchers to publish a lot and frequently. So much so that the peer-reviewed academic journal article has been positioned by some as having now assumed ‘a single central value, not that of bringing something new to the field but that of assessing the person’s research, with a view to hiring, promotion, funding, and, more and more, avoiding termination’. In such circumstances, as Lyotard makes clear, ‘[i]t is not hard to visualize learning circulating along the same lines as money, instead of for its “educational” value or political (administrative, diplomatic, military) importance’. To the extent that it is even possible to say that, just as money has become a source of virtual value and speculation in the era of American-led neoliberal global finance capital, so too have education, research and publication. And we all know what happened when money became virtual.