Against the Computational Creep

In this short post I want to think about the limits of computation: not the theoretical limits of the application or theorisation of computation itself, but the limits within which computation in a particular context should be contained. This is necessarily a normative position, but what I am trying to explore is the point at which computation, which can bring great advantages to a process, institution or organisation, starts to undermine or corrode the way in which a group, institution or organisation is understood, functions, or creates a shared set of meanings. Here, though, I will limit myself to the theorisation of this problem rather than its methodological implications, and to how we might begin to develop a politics of computation that is able to test and articulate these limits, and to develop a set of critical approaches which are also a politicisation of algorithms and of data.

By computational creep I mean the development of computation as a process rather than an outcome or thing (Ross 2017: 14). This notion of “creep” has been usefully identified by Ross in relation to extreme political movements that advance by what he calls “positive intermingling”.[1] I think that this is a useful way to think about computationalism, by which I do not merely mean the idea that consciousness is modelled on computation (e.g. see Golumbia 2009), but more broadly a set of ideas and a style of thought which argues that computational approaches are by their very nature superior to other ways of thinking and doing (Berry 2011, 2014). This is also related to the notion that anything that has not been “disrupted” by computation is, by definition, inferior in some sense, or is latent material awaiting its eventual disruption or reinvention through the application of computation. I would like to argue that this process of computational creep passes through six stages:

  1. Visionary-computational: Computation is suggested as a solution to an existing system or informal process. These discourses are articulated with very little critical attention to the detail of making computational systems or the problems they create. Usually, as Golumbia (2017) explains, they draw on a metaphysics of information and computation that bears little relation to the material reality of the eventual or existing computational systems. It is here, in particular, that the taken-for-grantedness of the improvements of computation is uncritically deployed, usually with little resistance.
  2. Proto-computational: One-off prototypes are developed to create notional efficiencies, manage processes, or ease the reporting and aggregation of data. Often there is an associated discourse claiming that this creates “new ways of seeing” that enable patterns to be identified which were previously missed. These systems often do not meet the required needs, but these early failures, rather than prompting questions about the computational, serve to justify more computation, often more radically implemented, with greater change being called for to make the computational work.
  3. Micro-computational: A wider justification emerges for small-scale projects to implement computational microsystems. These are often complemented by the discursive rationalisation of informal processes, or justified by the greater insight these systems are said to produce. This is where a decision has been taken to begin computational development, sometimes at a lightweight scale, but nonetheless the language of computation, both technically and as metaphor, starts to be deployed more earnestly as justification.
  4. Meso-computational: Medium-scale systems are created which draw from or supplement the minimal computation already in place. This discourse is often manifest in multiple, sometimes co-existing and incompatible computations, differing ways of thinking about algorithms as a solution to problems, and multiple, competing data acquisition and storage practices. At this stage the computational is beyond question: it is taken as a priori that a computational system is required, and where there are failures, more computation, and more social change to facilitate it, are demanded.
  5. Macro-computational: Large-scale investment is made to manage what has become a complex informational and computational ecology. This discourse is often associated with attempts to create interoperability through mediating systems, or with the provision of new interfaces for legacy computational systems. At this stage computation is seen as a source of innovation and disruption that rationalises social processes and helps manage and control individuals. These systems are taken to be a good in and of themselves, serving to avoid mistakes, bad behaviour, poor social outcomes and suchlike. The computational is now essentially metaphysical in its justificatory deployment, and the suggestion that computation might be making things worse is usually met with derision.
  6. Infra-computational: Calls are made for the overhaul and/or replacement of major components of the systems, perhaps with a platform, and for the rationalisation of social practices through user interface design, hierarchical group controls over data, and centralised data stores. This discourse is often accompanied by large-scale data tracking, monitoring and control over individual work and practices. This is where the notion of top-view, that is, the idea of management information systems (MIS), data analytics, large-scale Big Data pattern-matching and control through algorithmic intervention, is often reinforced. In this phase the system requires the free movement of data through an open definition (e.g. open data, open access, open knowledge), which allows the standardisation and sharability of data entities, and therefore further processing and softwarization. This phase often serves as an imaginary and is therefore not necessarily ever completed, its failures serving as further justification for new infrastructures and new systems to replace earlier failed versions.

This line of thinking draws on the work of David Golumbia, particularly the notion of Matryoshka dolls that he takes from the work of Philip Mirowski. This refers to multiple levels or shells of ideas that form a system of thinking, but which is not necessarily coherent as such, nor free of contradiction, particularly between the different layers of the shells. This is what “Mirowski calls the ‘Russian doll’ approach to the integration of research and praxis in the modern world” (Golumbia 2017: 5). Golumbia links this way of thinking about neoliberalism, as a style of thought that utilises this multi-layered aspect, to technolibertarianism, but here I want to think about computational approaches more broadly, that is, as instrumental-rational techniques of organisation. In other words, I want to point to the way in which computation is implemented, usually on a small scale, within an institutional context, where it acts as an entry-point for further rationalisation and computation. This early opening creates the opportunity for more intensive computation, which is implemented in a bricolage fashion: at least initially, there is no systematic attempt to replace an existing system, but over time, with the addition and accretion of computational partialities, calls grow for the overhaul of what is now a tangled and somewhat contradictory series of micro-computationalisms into a broader computational system or platform. Eventually this leads to a macro- or infra-computational environment which can be described as functioning as algorithmic governmentality, but which remains ever unfinished, with inconsistencies, bugs and irrationalities throughout the system (see Berns and Rouvroy 2013). The key point is that at all stages of computationally adapting an existing process there are multiple overlapping and sometimes contradictory processes in operation, even in large-scale computation.

Here I think that Golumbia’s discussion of the “sacred myths among the digerati” is very important, as it is this set of myths that goes unquestioned, especially early on in the development of a computational project, at what I am calling the visionary-computational and proto-computational phases, but equally throughout the growth in computational penetration. Some of these myths include: claims of efficiency, the notion of cost savings, the idea of communications improvement, and the safeguarding of corporate or group memory. In other words, before a computerisation project is started, these justifications are already being mobilised in order to justify it, without any critical attention to where these a priori claims originate or their likely truth content.

This use of computation is not limited to standardised systems, of course, by which I mean instrumental-rational systems that are converted from a paper-based process into a software-based process. Indeed, computation is increasingly being deployed in a cultural and sociological capacity: for example, to manage individuals and their psychological and physical well-being, to manage or shape culture through interventions and monitoring, and to shape the capacity to work together, as teams and groups, and hence to shape particular kinds of subjectivity. Here there are questions more generally for automation and the creation of what we might call human-free technical systems, but also for the conditions of possibility for what Bernard Stiegler calls the Automatic Society (Stiegler 2016). It is also related to the deployment of digital and computational systems in areas not previously thought of as amenable to computation, for example in the humanities, as represented by the growth of the digital humanities (Berry 2012, Berry and Fagerjord 2017).

That is to say, “the world of the digital is everywhere structured by these fictionalist equivocations over the meanings of central terms, equivocations that derive an enormous part of their power from the appearance that they refer to technological and so material and so metaphysical reality” (Golumbia 2017: 34). Of course, the reality is that these claims are often unexamined and uncritically accepted, even when they are corrosive in their implementations. Where these computationalisms are disseminated and their creep goes beyond social and cultural norms, it is right that we ask: how much computation can a particular social group or institution stand, and what should be the response to it? (See Berry 2014: 193 for a discussion in relation to democracy.) It should certainly be the case that we move beyond accepting a partial success of computation as implying that more computation is necessarily better. By critiquing computational creep, through the notion of the structure of the Russian doll in relation to computational processes of justification and implementation, together with the metaphysical a priori claims for the superiority of computational systems, we are better able to develop a means of containment, or algorithmic criticism. Thus, through a critical theory that provides a ground for normative responses to the unchecked growth of computation across multiple aspects of our lives and society, we can look to the possibilities of computation without seeing it as necessarily inevitable or deterministic of our social life (see Berry 2014).


[1] The title “Against the Computational Creep” is a reference to the very compelling book Against the Fascist Creep by Alexander Reid Ross. The intention is not to make an equivalence between fascism and computation; rather, I am interested in the concept of the “creep”, which Ross explains involves the small-scale, gradual use of particular techniques, the importation of ways of thinking, or the use of a form of entryism. In this article, the notion of the computational creep therefore refers to the piecemeal use of computation, or the importation of computational practices and metaphors into a previously non-computational arena or sphere, and the resultant change in the ways of doing, ways of seeing and ways of being that this computational softwarization tends to produce.


Berns, T. and Rouvroy, A. (2013) Gouvernementalité algorithmique et perspectives d’émancipation : le disparate comme condition d’individuation par la relation?, accessed 14/12/2016.

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave Macmillan.

Berry, D. M. (2012) Understanding Digital Humanities, Basingstoke: Palgrave.

Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury.

Berry, D. M. and Fagerjord, A. (2017) Digital Humanities: Knowledge and Critique in a Digital Age, Cambridge: Polity.

Golumbia, D. (2009) The Cultural Logic of Computation, Harvard University Press.

Golumbia, D. (2017) Mirowski as Critic of the Digital, boundary 2 symposium, “Neoliberalism, Its Ontology and Genealogy: The Work and Context of Philip Mirowski”, University of Pittsburgh, March 16-17, 2017.

Ross, A. R. (2017) Against the Fascist Creep, Chico, CA: AK Press.

Stiegler, B. (2016) The Automatic Society, Cambridge: Polity.


Three Universities

Every university is in reality three universities. In this article I name them: the university of truth, the university of learning, and the university of necessity. Collectively I call this the Schemata of the university, and the three imaginaries of the university pertain to partial experiences and to epistemological truths of the university which overlap and interact in important ways. The balance between these three realities is key to the success of a university qua university, but also to helping us understand how many of the modern proposals for reform of the university, by not taking these into account, serve to sever these worlds of the university from one another and, more worryingly, act to privilege one world at the expense of the others.

Palimpsest: Codex Guelferbytanus B, 026, folio 194 verso

A metaphor that might be useful for thinking about this is that the university functions as a kind of palimpsest, multiply inscribed with differing conceptions of the university’s ends. A palimpsest is, of course, a manuscript or piece of writing material on which later writing has been superimposed on effaced earlier writing, creating a multiply written document in which previous versions remain faintly visible through the newer writing. Similarly, the university form contains within it, inscribed in buildings and practices, discourse and policies, multiple over-writings that interact and coalesce in productive ways for the university to function as a university qua university. In this article, therefore, I want to examine how, in theorising the university, we can develop concepts adequate not only for defending the specificity of the university as a form, what I have called elsewhere universitality, but also, in a positive sense, for future development and invention in relation to the university form, by thinking through this palimpsest metaphor for understanding the ends of the university (see Berry 2017).

These differing imaginaries of the university, as necessity, as truth and as learning, must be overlaid and mutually respectful of each other’s presence for the university as an institution to function. That is, the reality of the university is constantly informed by, and shaped through, the interaction of these multiple possibilities of the university. Here, of course, I am gesturing to the notion of the Idea of a University, and seeking to develop and deepen this notion (Newman 1996). Conflicts between these imaginaries are common, indeed productive of the university as an institution, but they should nonetheless be balanced, as the overall institution should be manifest in cooperation at a higher level (Rothblatt 1972). What we might call the sublation of these imaginaries, and the institutional processes that enable these conflicts to be staged and performed without thereby undermining the whole, is a crucial element of ensuring the vitality of the capillary structures of the university, but also of ensuring that, in the moment of sublation, all the forces that represent these imaginaries feel represented, and that their contribution makes the university possible in the first place. When these imaginaries break out into open conflict, or cannot be sublated, or, even worse, when one imaginary seeks to liquidate another or to achieve hegemony in the university, then disaster is at hand.

I would like to now outline a tentative mapping of these imaginaries and start to think about how they interoperate and structure the possibilities that are opened up in the university.

university of truth

The university of truth is usually populated by academic staff, scholars and researchers. It is vital that the university should connect to and capture the imagination and hearts and minds of the general public in order to provide a benefit to the nation in which it is located. But this imagination is represented not in pandering to the fads and fashions of a purported population, nor to the whims of government policy, but rather in the search for truth (or the principles for the validation of truth) and the application of critical reason. This requires that the university of truth is constantly on the search for academic brilliance and outstanding thought in whatever way it is represented, in order that the university should continue pushing at the boundaries of thought and knowledge. This requires constant vigilance by the university of truth; it is its most important function and mission, and one which must not be distracted by other issues. The university of truth is, in most modern universities, lacking institutional representation, and manifests as an invisible college. However, its lack of institutional materiality must not be allowed to disqualify or undermine the crucial role it plays in keeping the university’s eyes focussed on the horizon of knowledge (Derrida 2004). No sacrifice is too great for the university of truth in securing academics of great repute, and merit and conspicuous ability must be nurtured and developed within the domain of this university.

university of learning

The university of learning is usually populated by teaching staff, students, and teaching fellows. The university of learning has a crucial function as a dissemination point for knowledge, passing on and continuing the state of the field across the generations. This has a two-fold outcome: providing the capacities for critical reason, through engagement with and contestation of knowledge claims and truths, but also furthering and deepening the individual and economic independence of the learner, so that on leaving the university they are more knowledgeable, critical and self-confident in their capacity to live in the world than when they entered it. The passage through the university is then a pathway created through complex knowledge fields, inoculating the individual against irrationality and virulent forms of populism. The ability to think for oneself, to dare to know (sapere aude!), is a benchmark against which success in this university should be judged.

university of necessity

The university of necessity usually consists of the management staff and the administrative and professional staff of the university. The university of necessity is tasked with providing the conditions which will create the facilities for the university of truth and the university of learning. It is named for its role in stabilising and managing the necessities of a university, such that these should not become the concern of the university of truth or the university of learning. The university of necessity, by virtue of its privileged role in managing and controlling the flows of funds and the structures of the university, has a strong duty of care towards the university of truth and the university of learning. In neoliberal models of the university, the university of necessity becomes the university of control, or of excellence, undermining the other spheres and weakening the institution and its mission (Readings 1996).

This early attempt to draft this schematic is necessarily explorative, but by thinking through the conflicting pushes and pulls of the university in its institutional form we can, through conceptual invention, help to energise and revivify the university as, respectively, a university of truth, of learning and of necessity.


Berry, D. M. (2017) Towards an Idea of Universitality, stunlaw

Derrida, J. (2004) Mochlos; or, The Conflict of the Faculties, in The Eyes of the University, Stanford University Press.

Newman, J. H. (1996) The Idea of a University, Yale University Press.

Readings, B. (1996) The University in Ruins, London: Harvard University Press.

Rothblatt, S. (1972) The Modern University and its Discontents: The Fate of Newman’s Legacies in Britain and America, Cambridge: Cambridge University Press.

The Uses of Open Access

It is increasingly clear that the university is undergoing rapid change in higher education systems right across the globe. This is partly due to the forces of digital technology, partly due to neoliberal restructuring of the higher education sector by governments, and partly due to a shift in student demographics, expectations and a new consumerist orientation. However, there is an additional pressure on universities, and an illogical one at that: the claim that they do not contribute to the public good through their practices of publication. This claim has more recently come from open access (OA) advocates, but also increasingly from governments that seek to use university research as a stimulus to economic growth. The claim is without foundation and unhistorical, but it is being made with greater stridency and is being taken up by research funders and university managements as an accurate state of affairs that they seek to remedy through new policies and practices related to academic publication. It is time, as Allington has convincingly argued, that we ask “what’s [OA] for? What did [OA’s] advocates… think it was going to facilitate? And now that it’s become mainstream, does it look as if it’s going to facilitate that thing we had in mind, or something else entirely?” (Allington 2013).

In this article I want to start to explore some of the major themes that I think need to be addressed in the current push towards open access, and how it serves as a useful exemplar of the range of “innovations” being forced on the university sector. With such a large subject I can only gesture to some of the key issues here, but my aim is to start to unpick some of the more concerning claims of open access advocates, and to question why their interests, government proposals and university management are too often oriented in the same direction. I want to suggest that this is not accidental, and actually reflects an underlying desire to “disrupt” the academy, which will have dire implications for academic labour, thought and freedom if it is not contested. Whilst it is clear that some open access advocates believe their work will contribute to and further the public good, without an urgent critique of the rapidity and acceleration of these practices, the university as it has been historically constituted through the independent work of scholars will be undermined, and the modern university as we have come to understand it may be transformed into a very different kind of institution.[1]

Within this new complex landscape of the university, there has been a remarkable take-up and acceleration of the notion of mandated Open Access (OA). Open Access is the use of copyright licences to make textual materials available to others for use and reuse, through a mechanism similar to that created by the Free Software Foundation with the GNU General Public License (GPL) and later developed through the activities of the open access movement and the Creative Commons organisation. The FLOSS (Free Libre and Open Source Software) movement and the Creative Commons have been important in generating new ways of thinking about copyright, but also in generating spaces for the construction of new technologies and cultural remixes, particularly through the GNU GPL licence and the Creative Commons Share-Alike licence (Berry 2008). Nonetheless, these new forms of production around copyright licences have not been free of politics, and often carry with them cyberlibertarian notions about how knowledge should be treated, how society should be structured, and the status of the individual in a digital age (see Berry 2008). These links between the ways of thinking shared by open source and open access raise particular concerns. As Golumbia has cautioned, “in general, it is the fervor for OA… especially as expressed in the idea that OA should be mandated at either the institutional or governmental level… [that] seems far more informed by destructive intent and ideology toward certain existing institutions and practices than its most strident advocates appear to recognize, even as they openly recommend that destruction” (Golumbia 2016: 76).

However, it is important to note at this point that I agree with Golumbia that,

this does not mean that OA is uniformly a bad idea: it is not. In many ways it is, very clearly, a good idea. In particular, versions of voluntary “green” OA, where researchers may or may not deposit copies of their works wherever and under whatever conditions they choose, and the voluntary creation of OA journals when not accompanied by pressure, institutional or social, to refrain from publishing in non-OA journals, strike me as welcome… But it is a good idea that has been taken far beyond the weight that the arguments for it can bear, and frequently fails to take into account matters that must be of fundamental concern to any left politics. Further, it is a good idea that is surrounded by a host of ideas that are nowhere near as good, and that fit too easily into the general rightist attack on higher education, especially in the humanities, that operates worldwide today (Golumbia 2016: 76).

To examine these issues, first I want to briefly explore the new political economic reality that has been facing the university in the late 20th and early 21st century. Indeed, we have seen these changes mapped out in a number of important recent publications about the UK and USA university systems (see for example, Collini 2012, 2017; Holmwood 2011; Readings 1996). Under this new regime, it is argued that the student is cast as consumer, and the academic is recast as an academic entrepreneur who must constantly seek to make “impact” through activities that lead to an outcome that can be quantified (Biswas and Kirchherr 2016). Finlayson and Hayward (2012) have argued that in changing the university, “four different rationales have been put forward by successive administrations or their appointed advisors for these reforms: 1. Expansion, 2. Efficiency, 3. Economic accountability – i.e. value for money, 4. Political accountability – i.e. democratisation or widening participation”.  These are demonstrated most clearly in the notion of “impact”. Stefan Collini, for example, describes how in the REF (Research Excellence Framework) consultation document 37 different “impact indicators” are outlined for assessing the university sector, most of which serve to promote economic or utilitarian interests,

nearly all of these refer to “creating new businesses”, “commercialising new products or processes”, attracting “R&D investment from global business”, informing “public policy-making” or improving “public services”, improving “patient care or health outcomes”, and improving “social welfare, social cohesion or national security” (a particularly bizarre grouping). Only five of the bullet points are grouped under the heading “Cultural enrichment”. These include such things as “increased levels of public engagement with science and research (for example, as measured by surveys)” and “changes to public attitudes to science (for example, as measured by surveys)”. The final bullet point is headed “Other quality of life benefits”: in this case, uniquely, no examples are provided. The one line under this heading simply says “Please suggest what might also be included in this list” (quoted in Finlayson and Hayward 2012).

Indeed, more recently Collini (2017) has described the events leading up to the emergence of what has come to be called the “impact agenda”. This is the idea that research should be shown to be socially beneficial and economically useful. Collini describes how Gordon Brown, then at the Treasury, was being lobbied by businesses who sought to change the incentives of the universities towards short-term, preferably commercial, impact-led innovation. This led to “impact” being added to the research assessment process of the REF, which many have argued, deliberately shifts how the university understands itself as an institution.

Similarly, in the “2003 White Paper and the 2007 Annual Review of the Science and Innovation Investment Framework that, in spite of one or two passing remarks about the value of education, the Government’s overriding concern is to harness and increase the economic impact of research… All the government reviews, papers and reports in the period are about how to make Higher Education serve the needs of the knowledge economy” (Finlayson and Hayward 2012). These kinds of claims and arguments are often related to the notion of the emergence of an information society, usually understood as a shift in Western economies from the production of goods to the production of innovation (see Berry 2008: 4). This is related to a similar notion of a knowledge-based economy which is built on the condition that there is knowledge, information and data freely flowing around that economy, and is structured in such a way as to allow exchange, aggregation, reuse and transformation, preferably with minimal forms of friction. Geert Lovink captures this well when he says that Google’s mantra is “let others do the work first that we won’t pay for. You write the book, we scan it and put our ads next to it” (Lovink 2016: 169). As Greenspan argued in 1996,

the world of 1948 was vastly different from the world of 1996. The American economy, more then than now, was viewed as the ultimate in technology and productivity in virtually all fields of economic endeavor [sic]. The quintessential model of industrial might in those days was the array of vast, smoke-encased integrated steel mills in the Pittsburgh district and on the shores of Lake Michigan. Output was things, big physical things. Virtually unimaginable a half-century ago was the extent to which concepts and ideas would substitute for physical resources and human brawn in the production of goods and services (Alan Greenspan, quoted in Perelman 2003).

Clive Humby has described a kind of process where “data is the new oil… Data is just like crude. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value” (Palmer 2006). Or as Wired put it, “like oil, for those who see data’s fundamental value and learn to extract and use it there will be huge rewards. We’re in a digital economy where data is more valuable than ever. It’s the key to the smooth functionality of everything from the government to local companies. Without it, progress would halt” (Toonders 2014). So this extractive metaphor, which is rich in illustrative description but which is limited for describing the process of creating, maintaining and using research, has nonetheless served to inspire governmental policy in numerous ways. For example, Meglena Kuneva, European Consumer Commissioner at the European Commission, has described personal data as “the new oil of the internet and the new currency of the digital world” (Kuneva 2009). Indeed, Hinssen 2012 uses the notion that “information is the new oil” and that we should be “drilling new sources of innovation”. Innovation in this sense, usually means changing or creating more effective processes, products and ideas for commercial exploitation. Naturally, the next step has been to connect the notion of data (or “open data” as it has been termed) to this extractive metaphor. Indeed, the Office for National Statistics (a producer of data sets) has argued that “if data is the new oil, Open Data is the oil that fuels society and we need all hands at the pump” (Davidson 2016). What makes data into open data, is that it is free of intellectual property restrictions that prevent it from being used by others by publishing constraints, such as copyright, and that it is machine readable. 
Open data, like open access publications and open source before it, relies on copyright licenses to grant the user the right to dice up and remix the textual or other digital materials in ways that can create new forms of innovative products. Under these conditions open access-type works can be collected into a computer-processable corpus to be subjected to pattern-matching algorithms, Big Data analysis, used as free content to populate Silicon Valley apps and services, and other processing that turns the “oil” into economic products.
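To make the kind of “refining” described above concrete, here is a minimal, purely illustrative sketch (not drawn from any of the texts cited; the corpus and function names are hypothetical) of how an openly licensed corpus, once aggregated, can be run through a trivial pattern-matching pass without any further permission being required:

```python
from collections import Counter
import re

# Hypothetical openly licensed corpus: once texts carry permissive licences,
# they can be aggregated and processed en masse by any third party.
corpus = [
    "Data is the new oil. Data must be refined to have value.",
    "Open data is machine readable and free of copyright restrictions.",
]

def refine(texts):
    """A trivial 'refinery': tokenise the texts and count word frequencies."""
    tokens = []
    for text in texts:
        tokens.extend(re.findall(r"[a-z]+", text.lower()))
    return Counter(tokens)

counts = refine(corpus)
print(counts.most_common(3))  # 'data' is the most frequent token
```

The point of the sketch is only that the economic “value” extracted here depends entirely on the legal openness of the inputs: the code itself is commonplace, and nothing in it compensates the producers of the underlying texts.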

In this sense, the knowledge economy is built on a contradictory set of principles: property rights to control intellectual products and processes (including digital rights management), and a mechanism to promote the “free” or “open” circulation of data and information. This contradiction is resolved if one understands them not as mutually antagonistic, but rather as differing spheres or layers of the knowledge economy, with free data and information at the bottom, waiting to be exploited by entrepreneurs, and a thriving ecosystem of corporations living on top of this land. Indeed, within the academic literature and in governmental publications there is a tacit notion that government, government-funded research and historical cultural materials (usually out of copyright but not digitised and sitting in archives) should become freely available knowledge in a form that can be “mined” by the private sector in order to create economic growth. But to fully realise this vision, much more of the information and knowledge generated by, for example, universities and archives will need to be opened up for innovation. This opening up quite literally means providing their materials in a digital form without the kinds of copyright protections that have historically provided the stimulus for research publication, handed over to the private sector gratis. Indeed, these private sector corporations are driven by very different norms to the research university and certainly do not share its ethical commitment towards science and knowledge. Rather, “the norms that guide how companies like… Google organise and disseminate knowledge are primarily market based and have little in common with the formative practices and intellectual virtues that constitute the core of the research university” (Wellmon 2015: 272).

One of the most influential descriptions of the workings of innovation has been the notion of “disruptive innovation”, a term developed by Christensen (1997), which describes a process by “which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves up market, eventually displacing established competitors” (Christensen 2017). This notion of disruptive innovation is now very much part of Silicon Valley ideology, and has become part of a discourse that has led to calls for “disruption” in other sectors of the economy, from taxis and hotels to deliveries and education. Disruption theory has been connected with new ways of doing things that disturb, interrupt or restructure what is perceived to be a “closed” way of doing things, whether that closure stems from unionisation, monopoly or oligopoly behaviour, or public sector and educational provision. Indeed, in regard to the possible disruption of the university sector, the Economist was keen to argue that technology “innovation is eliminating those constraints [of existing universities]… and bringing sweeping change to higher education” (Economist 2014). What is striking is how this notion of disruptive innovation is increasingly being mobilised in relation to the university, from government, industry and also from university management itself. But also how, often uncritically, many open access advocates are echoing the need for a disruptive innovation of university publication practices (see Mourik Broekman et al 2015).

This is an example of how disruptive innovation in relation to the university sector has created the conditions under which university research outputs have been re-articulated as “not open”. There has subsequently been an attempt to argue that they must be “opened up” so that they become a resource which can be made available to others to contribute to innovation and economic growth. One of the most important examples is the Finch report of 2012, commissioned by the UK Government. This report drew on many of these themes and made an explicit link between economic growth and the access to and use of publicly funded research. It argued, “most people outside the HE sector and large research-intensive companies – in public services, in the voluntary sector, in business and the professions, and members of the public at large – have yet to see the benefits that the online environment could bring in providing access to research and its results”. These innovations, it argues, are prevented because of “barriers to access – particularly when the research is publicly-funded – [and] are increasingly unacceptable in an online world: for such barriers restrict the innovation, growth and other benefits which can flow from research” (Finch 2012).

The “barriers to access” the report counter-intuitively identifies are the practices of publishing research in the public sphere in a form which has been enormously successful in transforming our societies over the last 350 years. Indeed, it is as if universities had, by publishing materials over this period, been actively seeking to create a closed system, rather than, as was actually the case, contributing to Enlightenment notions of a Republic of Letters and open science. Indeed, as Bernard Stiegler has pointed out, “every student who enrols in the final year of school [in France] is expected to know that the Republic of Letters was conditioned by the publishing revolution from which sprang gazettes and then newspapers, and that the philosophy of the Enlightenment that inspired the French Revolution itself emerged from this Republic of Letters” (Stiegler 2016: 235). Golumbia (2016: 77) has similarly observed that there is a real problem with open access advocates’ arguments that “what we have until the last decade or two called ‘publication’ somehow restricts access to information, rather than making [that] information more available”. These claims are not made more believable by the OA habit of singling out one or two major journal publishers with especially problematic publication pricing strategies. This partial representation of the wider landscape of publishing, and the use of selective and often very emotive cases to argue that all academic publication is against the public good, is damaging to academia as a whole as well as unsubstantiated. This proselytising of the virtues of open access without any concern for its potential dangers is very reminiscent of the intense argumentation that has taken place within the FLOSS movement, where similar zealotry has been observed (see Berry 2008).

Open access advocates often claim an alignment between open access and democratisation, participation and the public good, but to me this is only part of the story about why open access is now being promoted by government. Indeed, if one were in any doubt about why open access might be useful, Finch has helpfully laid this out,

support for open access publication should be accompanied by policies to minimise restrictions on the rights of use and re-use, especially for non-commercial purposes, and on the ability to use the latest tools and services to organise and manipulate text and other content

[government should seek to] extend the range of open access and hybrid journals, with minimal if any restrictions on rights of use and re-use for non-commercial purposes; and ensure that the metadata relating [to them] makes clear [that] articles are accessible on open access terms.

It goes without saying that these moves seek to ensure that “innovative” products can be refined from research outputs that have no restrictions on their extraction, use, and exploitation. Finch also uncritically argues that universities should fund, in combination with research councils and government, research that could later be used free of restrictions by commercial users, without themselves contributing back into this open access repository. In effect, Finch is arguing for greater public subsidy for the private sector’s use of university research outputs. Indeed, the range of information from universities that Finch saw as available for exploitation includes “research publications…reports, working papers and other grey literature, as well as theses and dissertations… publications and associated research data” (Finch 2012).[2] Not only is Finch generalising the case for open access to all forms of output from university and related research institutions, she is also eager to assume that students’ MA dissertations and PhD theses are also fair game for commercial exploitation, without consideration of the ethical or legal implications of mining student work without their permission or consent. As Stiegler has argued, “the logic of the free and open (free software and hardware, open source, open science, open data, and so on), while initially conceived in order to struggle against the privatisation of knowledge and the plundering of those who possessed it, was able to be turned against the latter” and into a new form of proletarianisation (Stiegler 2016: 240).[3] Google and other companies have “touted their services as ‘free’ and available to all, but these companies are under pressure to return a profit to their investors” (Wellmon 2015: 272).

Open Access is too often presented as an unquestioned good, especially by its more zealous advocates (for a useful critique of this, see Golumbia 2016).[4] Following the push for mandating journal articles as open access across the UK higher education sector, for example, there is now a developing discourse of open access for monographs which tends to uncritically accept OA’s “progressive” benefits (see Crossick 2015). Indeed, this has now been confirmed as part of the UK REF for 2027, and it will be a major change to the way academics publish long-form academic work; it will also affect their control over their academic writings in book form and represents a major change to academic practice. Although few authors make much money from their monographs, books nonetheless represent an independent income stream disconnected from their employers, one that has helped to support and reinforce academic freedom. It should be noted that this is a proposal very much encouraged by university management, and it has not been subject to sufficient critical attention by the academic community, who are often distracted by the claims to “democracy” or “public culture” to which open access is linked. Indeed, as Fuller has argued, “public access to academic publications in their normal form is merely a pseudo-benefit, given that most people would not know what to make of them” (Fuller 2016).

In this short article I have sought to contribute to work that problematises open access ideas and places them within their specific historical location. By drawing links between government policies that have sought to reorient the university from its historical mission of research and understanding towards economic growth and impact, one begins to see a new alignment of power and knowledge. Open access appears at a time when digital technologies are changing the contours of the dissemination of knowledge and information and are also challenging the publishing industry with new means of publication. Therefore “granting companies… the authority to distribute, even as platforms and not necessarily owners, university-produced knowledge could cede control over the dissemination and organisation of knowledge to institutions primarily oriented to profit-making” (Wellmon 2015: 272). Indeed, OA cannot be understood without seeing it within this wider historical constellation, and consequently its advocates’ attempts to depoliticise it by placing it within a moral category, that is, as an obvious good, are extremely concerning and need urgent critique. Additionally, as Fuller argues, “much of the moral suasion of the open access movement would be dissipated if it complained not only about the price of academic journals but also the elite character of the peer-review process itself… in effect open access is making research cheaper to those who already possess the skills to [use it]…” (Fuller 2016). Open access raises important questions about how publications can better reach publics and audiences, but by exaggerating its advantages and dismissing its disadvantages, it becomes ideological and therefore unreflexive about its uses in the current restructuring of the university and knowledge in the 21st century.


[1] Rockhill (2017) has written about how these changes in the university diminish the range of critical voices that historically were found in the academy. Indeed, he suggests that they “should invite us to think critically about the current academic situation in the Anglophone world and beyond, [for example]… the ways in which the precarization of academic labor contributes to the demolition of radical leftism. If strong leftists cannot secure the material means necessary to carry out our work, or if we are more or less subtly forced to conform in order to find employment, publish our writings or have an audience, then the structural conditions for a resolute leftist community are weakened”. Similarly, Golumbia has argued that “depriving professors of the opportunity to earn money for their own creative and scholarly productions is one of the best ways to eviscerate what is left of the professoriate” (Golumbia 2013).
[2] Finch argued further and completely bizarrely that “we therefore expect market competition to intensify, and that universities and funders should be able to use their power as purchasers to bear down on the costs to them both of APCs and of subscriptions” (Finch 2012). The idea that a smaller number of academic purchasers would drive down prices by paying for production rather than consumption of research publications was presented with no evidence except for the self-evidence of the claim. 
[3] Stiegler also quotes that “Catherine Fisk, a lawyer, has gone through old trials in the US in which employers and employees confronted each other over the ownership of ideas. In the early 19th century, courts tended to uphold the customary right of workers to freely make use of knowledge gained at the workplace, and attempts by employers to claim the mental faculties of trained white workers were rejected by courts because this resembled slavery too closely. As the knowhow of workers became codified and the balance of power shifted, courts began to vindicate the property claims of employers” (Stiegler 2016: 240).
[4] Andrew Orlowski has argued a similar point in relation to free culture advocates in cultural production, “unfortunately for the creative industries, there’s money and prestige to be gained from promoting this baffling child-like view [that the creative economy exists to deprive people of publicly owned goods]. The funds that cascade down from Soros’ Open Society Initiative into campaigns like A2K, or from the EU into NGOs like Consumer International, or even from UK taxpayers into quangos like Consumer Focus, all perpetuate the myth that there’s a ‘balance’: that we’ll be richer if creators are poorer, we’ll have a more-free society if we have fewer individual rights, and that in the long-term, destroying rewards for creators is both desirable and ‘sustainable’” (Orlowski 2012).


Allington, D. (2013) On open access, and why it’s not the answer.

Berry, D. M. (2008) Copy, Rip, Burn: The Politics of Copyleft and Open Source, London: Pluto Press.

Biswas, A. and Kirchherr, J. (2016) The Tough Life of an Academic Entrepreneur: Innovative commercial and non-commercial ventures must be encouraged, LSE Blog.

Mourik Broekman, P., Hall, G., Byfield, T. Hides, S. and Worthington, S. (2015) Open Education: A Study in Disruption, London: Rowman and Littlefield.

Collini, S. (2012) What Are Universities For?, London: Penguin.

Collini, S. (2017) Speaking of Universities, London: Verso.

Christensen, C. M. (1997) The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail, Boston, Massachusetts, USA: Harvard Business School Press.
Christensen, C. M. (2017) Disruptive Innovation.
Crossick, G. (2015) Monographs and Open Access: A report to HEFCE, HEFCE.
Davidson, R. (2016) Open Data is the new oil that fuels society, Office for National Statistics.

Economist (2014) Massive open online forces, The Economist

Finch (2012) Accessibility, sustainability, excellence: how to expand access to research publications, Report of the Working Group on Expanding Access to Published Research Findings.

Finlayson, G. and Hayward, D. (2012) Education towards heteronomy: a critical analysis of the reform of UK universities since 1978, libcom.org
Fuller, S. (2016) Academic Caesar, London: Palgrave Macmillan.

Golumbia, D. (2013) On Allington on Open Access, uncomputing.

Golumbia, D. (2016). Marxism and Open Access in the Humanities: Turning Academic Labor against Itself, Workplace, 28, 74-114.
Hinssen, P. (2012) (Ed.) Information is the New Oil: Drilling New Sources of Innovation.
Holmwood, J. (2011) (Ed.) A Manifesto for the Public University, London: Bloomsbury Academic.
Kuneva, M. (2009) Keynote Speech, Roundtable on Online Data Collection, Targeting and Profiling.
Lovink, G. (2016) Social Media Abyss: Critical Internet Cultures and the Forces of Negation, Cambridge: Polity.

Orlowski, A. (2012) Popper, Soros, and Pseudo-Masochism.

Palmer, M. (2006) Data is the New Oil.

Perelman, M. (2003) ‘The Political Economy of Intellectual Property’, Monthly Review 54 (8): 29–37.

Readings, B. (1996) The University in Ruins, London: Harvard University Press.
Rockhill, G. (2017) The CIA Reads French Theory: On the Intellectual Labor of Dismantling the Cultural Left, The Philosophical Salon.
Stiegler, B. (2016) The Automatic Society: The Future of Work, Cambridge: Polity.

Toonders, Y. (2014) Data is the New Oil of the Digital Economy, Wired.

Wellmon, C. (2015) Organizing Enlightenment: Information Overload and the Invention of the Modern Research University, Baltimore: Johns Hopkins University Press.

Towards an Idea of Universitality

Ruins of Plato’s Academy

What would it mean to reclaim the university from its ruins? To revisit what were considered the fundamental conditions of the social epistemology of the university without falling into the trap of nostalgia and traditionalism? In this short post I want to think about what might be the content of a notion of what I am calling universitality, understood precisely as the conceptualisation of a constellation of thought and practice manifested through multiple histories, practices, institutions and bodies related to the idea of a university (Readings 1996; Rothblatt 1972; Thelin 2011; Whyte 2016). By means of a set of hodos, I intend to examine the notion of a university, with a view to developing a new conceptualisation in response to the contemporary crisis of the university, but also in terms of the crisis of its epistemology in and through the university in its modern corporate form.

This is to rethink the university in light of the more recent challenge to universities and collegiality. To turn a critical eye over the return of a philosophy of utility which hangs over the fate of universities in the 21st century and which dates back to before the founding of the University of London (Collini 2012; Holmwood 2011). Not to say, of course, that this is necessarily a new threat to the university (Newman 1996; Shils 1972). Indeed, the history of the university has also been a history of thought against power, reason against utility, until in the 20th and 21st century thought and reason become themselves instrumentalised in the service of a project of economism driven in part by computationalism and neoliberalism. But what I explicitly seek to do in this article, in contrast to Collini (2017: 24), is to “propose some ideal or essence, some way of distinguishing supposedly ‘real’ universities from institutions that do not deserve the name”. In other words, by making a cut, which here Collini (2017) is reluctant to do, one develops the means of describing and classifying what we might call the university-ness of a university. This is, by its nature an exercise in genealogy as much as description, but it is also about recovering an idea of the university that seems to be all but forgotten, and without which we struggle to articulate a sense of an idea of what a university is for.

I draw the notion of universitality from the Latin universitas, in the particular sense of Studium Generale (understood as a place where students came to study) and more particularly as Magistrorum et Discipulorum (i.e. masters and scholars, where scholars here means students). A universitas is a form of organisation that can own and control a group’s property in common for its members, and which has a set of rules and regulations to which the masters and students must conform in order to be accepted into the guild. Indeed, this is the etymological source of the notion of ‘university’, which originally denoted a corporate body of masters and students acting as a legal person. The universitas typically exists where a resource is too large for a single member to administer, or to provide temporal security beyond individuals’ lifetimes. These corporate bodies, subsequently granted by royal decree, were similar to municipalities or guilds, which would often own property such as racetracks and theatres. What is important to note here is that the term university is not drawn from universal or general knowledge, but rather from the generality of the people who can study within the universitas. The idea was that a universitas could be joined by anyone capable of profiting from being there, that is, without distinction of class, age, rank or previous occupation. So the universitas was understood as a specific form of corporation or society, hence the notion of members of the society being identified as socii (e.g. Fellows, a term still used at Oxford and Cambridge, and elsewhere for visiting academics). In this understanding of the university only the Fellows are essential to the university, and they are tasked with the search after knowledge, to advance knowledge and to possess knowledge for themselves. Indeed, historically, the role of the university has been closely associated with the production of knowledge, right up to present times.
But universitatis also created the conditions for particular epistemologies and particular ways of seeing.

Here are clues to the first aspect of universitality, the notion that those who make up its core are a community of associates, the masters, dedicated to the advancement of knowledge, understanding and learning. Built around this core group are the structures of the buildings, the libraries, and the scholars or students who are instructed, trained and educated, but also tested, licensed, and qualified for competence by the masters. This is also the basis for Kant’s assertion that the university is ruled by an idea of reason emerging from philosophy, in other words, with infinity (see Derrida 2004: 83-112). Kant outlined this argument about the nature of the university in 1798 in The Conflict of the Faculties. He argued that all of the university’s activities should be organised through a single regulatory idea – the concept of reason. Kant argued that reason and the state, knowledge and power, could be unified in the university by the production of individuals capable of rational thought and republican politics – the students trained for the civil service and society. This is the beginning of the modern notion of a university, and with it the development of both objective and subjective attempts to shape knowledge and learning towards the needs of modernity and its complex society. With this we see the development of the second aspect of the concept of universitality: the idea that a specific social epistemology of a scholarly community is regulated by the notion of reason.


Collini, S. (2012) What Are Universities For?, London: Penguin.

Collini, S. (2017) Speaking of Universities, London: Verso.

Derrida, J. (2004) Mochlos; or, The Conflict of the Faculties, in The Eyes of the University, Stanford University Press.

Holmwood, J. (2011) (Ed.) A Manifesto for the Public University, London: Bloomsbury Academic.

Kant, I. (1991) The Conflict of the Faculties, in Kant, I., Kant: Political Writings, Cambridge University Press.

Newman, J. H. (1996) The Idea of a University, Yale University Press.

Readings, B. (1996) The University in Ruins, London: Harvard University Press.

Rothblatt, S. (1972) The Modern University and its Discontents: The Fate of Newman’s Legacies in Britain and America, Cambridge: Cambridge University Press.

Shils, E. (1972) Intellectuals and the Powers and Other Essays, The University of Chicago Press.

Thelin, J. R. (2011) A History of American Higher Education, Baltimore: Johns Hopkins University Press.

Whyte, W. (2016) Redbrick: A Social and Architectural History of Britain’s Civic Universities, Oxford: Oxford University Press.

Prince Rupert’s Drop

A Prince Rupert’s Drop

Prince Rupert’s Drops offer very suggestive metaphors for the state of a society in a moment of both extreme resilience and potential fragmentation. The drops appeared in England during the seventeenth century, immediately after the Restoration of the English monarchy. Charles II was interested in the new sciences that were emerging and was familiar with many of the scientific controversies of the day; Prince Rupert, too, was fascinated by new scientific discoveries and curiosities. Indeed, the Royal Society, founded in 1660, was granted a royal charter by King Charles II, and its scientific activities caused great interest across English society. For example, Samuel Pepys mentions the drops in his diary of 13 January 1662, as “chymicall glasses, which break all to dust by breaking off a little small end; which is a great mystery to me” (Pepys 1662).

The Prince Rupert’s Drop (Lacrymae Vitreae) is a scientific oddity, originally thought to have emerged when molten glass was, perhaps accidentally, released into the water that was always found near glass-blowers’ furnaces (Beckmann et al 1846: 241). As Brodsley et al explain, a sense of mystery permeates the history of Prince Rupert’s Drops, which they link to the fact that, since the time of Emperor Tiberius, glass-blowers may well have had a taboo about mentioning information about glass to outsiders, as Tiberius had ordered the inventor of toughened glass to be put to death to prevent the technique being communicated to others. When molten glass is dropped into water in a particular way, it forms what look like glass tears, or tadpole-shaped glass beads. The beads themselves, together with their tails of glass, have some extremely odd properties, which, when they came to the attention of experimental philosophers in the middle of the seventeenth century, caused much excitement.

The ‘bubbles’ (the solid ones, at least) were what were later to be called ‘Prince Rupert’s drops’. (Those said to contain ‘liquor’ could have been something different, but were probably the same, containing vacuoles and no actual liquid.) These objects, glass beads with the form of a tear, tapering to a fine tail, made (though that was not generally known at the time) by dripping molten glass into cold water, exhibited a paradoxical combination of strength and fragility not without interest to the materials scientist of the present day, and which could not fail to excite the imagination of natural (and not so natural) philosophers of the 17th century. The head withstands hammering on an anvil, or, as a more modern test, squeezing in a vice, indenting its steel jaws, without fracture: yet breaking the tail with finger pressure caused the whole to explode into powder (Brodsley 1986: 1).

Prince Rupert

Beckmann et al explain that the beginning of the scientific examination of these glass drops is somewhat clouded in history, with evidence that they were made in The Netherlands in 1656 (hence their other name, Dutch drops) and displayed in Paris and other cities to much interest (although some thought they had emerged from Sweden rather than The Netherlands). Brodsley dates their earliest appearance to the Mecklenburg glass-houses before 1625 (Brodsley 1986: 5-6). Nonetheless, their name became associated with Prince Rupert, who brought them to England as gifts for Charles II, and they were given to the Royal Society. Prince Rupert had returned to England from Germany in 1660 to join Charles II after the Restoration. The drops were experimented with at the Royal Society in 1661 and examined by Robert Hooke (1665) and Thomas Hobbes (1662). According to the minutes of the Royal Society, “the King sent by Paul Neile five little glass bubbles, two with liquor in them, and the three solid, in order to have the judgement of the society concerning them” (Brodsley 1986: 1). In 1663 even Samuel Butler referred to them in his poem, Hudibras,

Honour is like that glassy bubble
That finds philosophers such trouble,
Whose least part crack’d, the whole does fly
And wits are crack’d, to find out why.

The way in which the drop’s head is formed, with its surface in a state of compressive stress balancing tensile stress in the interior, creates a remarkably strong material surface, which cannot be cracked with a hammer, or even with a bullet.[1] However, should the tail be given the slightest crack, the entire structure disintegrates into an explosion of glass due to the high potential energy stored in the glass and released via the tail. It was not until A. A. Griffith’s research in 1920 that,

the qualitative ideas of the strengthening effect of compressive stress could be given a detailed and mathematical formulation. In Griffith’s theory, the fracture of a brittle substance, such as glass, is initiated from pre-existing microcracks, which can grow larger, extracting enough elastic energy from their surroundings to pay for the energy of the increased area of free surface only if the stress around them is tensile and the product of the stress and the square root of the crack diameter exceeds a critical value dependent on basic properties of the material. (Brodsley 1986: 2).
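The criterion paraphrased in the quotation can be written schematically. This is my own rendering of the standard Griffith formulation, not notation from Brodsley et al: a pre-existing crack of size $a$ under tensile stress $\sigma$ grows when the product of stress and the square root of crack size exceeds a material constant,

```latex
\sigma \sqrt{a} \;>\; C,
\qquad
C \sim \sqrt{\tfrac{2 E \gamma_s}{\pi}},
\qquad\text{i.e.}\qquad
\sigma_f \;=\; \sqrt{\frac{2 E \gamma_s}{\pi a}}
```

where $E$ is Young’s modulus, $\gamma_s$ the surface energy per unit area of the new crack faces, and $\sigma_f$ the fracture stress. On this reading, the drop’s behaviour follows directly: at the compressed surface of the head the stress is not tensile, so the criterion is never met, while a crack entering the tail reaches the tensile interior and propagates catastrophically, releasing the stored elastic energy all at once.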

This curious combination of fragmentation and explosion of the drop from a tiny crack in the tail, alongside the incredible resilience of its head, led to many analogies being drawn from its example. For example, as early as 1671, Geminiano Montanari, who sent a paper to the Royal Society on the subject of the Prince Rupert’s Drop, concluded by saying “so is a kingdom one and strong but when the top is broken shivers into men” (McManus 2014). Similarly, in 1851, in his Address to the Citizens of Concord on the Fugitive Slave Law, Ralph Waldo Emerson remarked that Daniel Webster, a Massachusetts senator elected by the Whig party, thought,

that the American Union is a huge Prince Rupert’s Drop, which, if so much as the smallest end be shivered off, the whole will snap into atoms. Now, the fact is quite different from this. The people are loyal, law-abiding. They prefer order, and have no taste for misrule and uproar (Emerson 1851: 182).

Indeed, Freud in Group Psychology and the Analysis of the Ego (1921) contemplated “the loss of the leader in some sense or other, the birth of misgivings about him, brings on the outbreak of panic, though the danger remains the same; the mutual ties between the members of the group disappear, as a rule, at the same time as the tie with their leader. The group vanishes in dust, like a Prince Rupert’s drop when its tail is broken off” (Freud 1921).

At a time when there appears to be a rise in authoritarianism and populism, and in strong identification with a leader, this metaphor provides a way of thinking about the political unity of contemporary constellations of reactionary political movements. The Prince Rupert’s Drop perhaps becomes useful again as a metaphor for thinking about the possible effects of this kind of political sensibility. But whether it is the new political constellations themselves, or society as a whole, that fragments when the potential energy in the tail is released depends ever more on the political sensibility, levels of rationality and critical reflexivity of a public which, under conditions of computational capitalism, looks increasingly unprepared for a digital age.


[1] See for a 130,000 frames per second video of the Prince Rupert’s Drop as it explodes.


Beckmann, J., Johnston, W., Francis, W. and Griffith, J. W. (1846) A History of Inventions, Discoveries, and Origins, London: H. G. Bohn.

Brodsley, L., Frank, C. and Steeds, J.W. (1986) Prince Rupert’s Drops, Notes and Records of the Royal Society of London, Vol. 41, No. 1 (Oct., 1986), pp. 1-26

Emerson, R. W. (1851 [2005]) Address to the Citizens of Concord on the Fugitive Slave Law, in The Selected Lectures of Ralph Waldo Emerson, University of Georgia Press.

Freud, S. (1921) Group Psychology and the Analysis of the Ego, London: W. W. Norton & Company.

Hobbes, T. (1662) Problematica Physica, translated in English in 1682 as Seven Philosophical Problems, pp. 36-39, 146-148.

Hooke, R. (1665) Micrographia: or Some Physiological Descriptions of Minute Bodies made by Magnifying Glasses with Observations and Inquiries thereupon, London: Jo. Martyn and Ja. Allestry.

Pepys, S. (1662) Monday 13 January 1661/62, The Diary of Samuel Pepys, accessed 09/01/2017,


In this post I want to introduce the notion of infrasomatization. The intention is to expand the categories of exosomatization and endosomatization developed by Alfred J. Lotka and Nicholas Georgescu-Roegen in their work on ecological economics, and by Karl Popper in relation to what he called objective knowledge (see Lotka 1925; Georgescu-Roegen 1970, 1972, 1978; Popper 1972). The terms exosomatization and endosomatization have more recently been deployed in the work of Bernard Stiegler in relation to thinking about human augmentation and digital technologies, particularly in relation to the anthropocene (see for example, Stiegler 2015a). I want to use the notion of infrasomatization as a contribution to thinking about the questions raised by these concepts, but also to move away from a binary between endosomatic and exosomatic by introducing a third term. First, it might be useful to briefly survey the earlier uses of these terms.

Alfred J.  Lotka described the world as a giant engine and argued that man and nature should be understood holistically, particularly to show how human activity had an influence upon the operation of what he called the “world engine” (Lotka, 1925: 331). For Lotka, what he called exosomatic elements are different from genetic, endosomatic organs like arms, legs or hands. Exosomatic elements are tools and other instruments used by man to produce, exchange and consume energy in some form. Exosomatic organs, therefore, are an extension of the natural functions of man and the upshot of economic production. As he argued,

In place of slow adaptation of anatomical structure and physiological function in successive generations by selective survival, increased adaptation has been achieved by the incomparably more rapid development of ‘artificial’ aids to our native receptor–effector apparatus, in a process that might be termed exosomatic evolution (Lotka, 1945: 188).

Nicholas Georgescu-Roegen used and developed Lotka’s ideas in biophysical economics, particularly in The Entropy Law and the Economic Problem (1970), Energy and Economic Myths (1972) and Inequality, Limits and Growth from a Bioeconomic Viewpoint (1978). He argued,

Apart from a few insignificant exceptions, all species other than man use only endosomatic instruments — as Alfred Lotka proposed to call those instruments (legs, claws, wings, etc.) which belong to the individual organism by birth. Man alone came, in time, to use a club, which does not belong to him by birth, but which extended his endosomatic arm and increased its power. At that point in time, man’s evolution transcended the biological limits to include also (and primarily) the evolution of exosomatic instruments, i.e., of instruments produced by man but not belonging to his body. That is why man can now fly in the sky or swim under water even though his body has no wings, no fins, and no gills (Georgescu-Roegen, 1972: 81).

We might summarise the distinction drawn by Georgescu-Roegen as between:

endosomatic instruments (legs, claws, wings, etc.) which belong to the individual organism by birth 

exosomatic instruments, that is, of instruments produced by man but not belonging to his body (Georgescu-Roegen, 1972: 81) 

Karl Popper (1972) similarly drew the notion of exosomatisation from biology, arguing (against Hume) that the specificity of human reason is related to the exosomatic processes of externalisation of reason as writing, which enables the possibility of criticism and therefore of the correction of incorrect inferences (Popper 1972: 98). Popper argued that Hume claims that “in practice we make… inferences, on the basis of repetition or habit”, a psychology Popper describes as “primitive”. Indeed, Popper further argues that “without the development of an exosomatic descriptive language – a language which, like a tool, develops outside the body – there can be no object for our critical discussion” and that through the externalisation of language “a linguistic third world can emerge; and it is only in this way, and only in this third world, that the problems and standards of rational criticism can develop” (Popper 1972: 120). He expands, arguing,

Animal evolution proceeds largely, though not exclusively, by the modification of organs (or behaviour) or the emergence of new organs (or behaviour). Human evolution proceeds, largely, by developing new organs outside our bodies or persons: ‘exosomatically’, as biologists call it, or ‘extra-personally’. These new organs are tools, or weapons, or machines, or houses… The rudimentary beginnings of this exosomatic development can of course be found among animals. The making of lairs, or dens, or nests, is an early achievement. I may also remind you that beavers build very ingenious dams. But man, instead of growing better eyes and ears, grows spectacles, microscopes, telescopes, telephones, and hearing aids. And instead of growing swifter and swifter legs, he grows swifter and swifter motor cars (Popper 1972: 238).

But as Popper was particularly interested in the development of rationality in and through the capacity for the externalisation of the processes of communication, in language and through the materialisation of thoughts in a medium of expression, he argued that “instead of growing better memories and brains, we grow paper, pens, pencils, typewriters; dictaphones, the printing press, and libraries” (Popper 1972: 239).

Similarly, Bernard Stiegler has begun deploying the concepts of endosomatic and exosomatic in his more recent work (see Stiegler 2015a, 2015b), arguing,

Marx and Engels showed at the beginning of The German Ideology (1845) that humanity consists above all in a process of exosomatization that pursues evolution no longer through somatic but through artificial organs (which was already glimpsed by Herder 70 years prior to these two early theorists of the role of technology in the formation of social relations and knowledge). But humankind has discovered to its stupefaction that this exosomatization is now directly and deliberately produced by the market — and, with respect to the immense transformations to which it gives rise, without offering any choice other than, in the best case, the profitability of investment, or, in the worst case, the pure speculation involved in the increasingly tight connection between the casino economy, marketing and R&D conceived according to inherently short-term, and therefore speculative, models of disruption (Stiegler 2015a).

Today, of course, we have new forms of externalisation which complicate this picture of the mere externalisation of what Popper described as internal thoughts and ideas made exosomatic. I claim not only that computational techniques and technologies differ from previous materialisations, but that they are also troublingly constitutive of, and able to frame, how those externalisations are made (see Berry 2011, 2014). I haven’t the space here to explore the specificities of the materialities of previous mediums and their capacity to shape thoughts and ideas, but I want to highlight the difference of computational forms in their processual shaping and reshaping, that is, the very fluidity of the moment of a new kind of externalisation under the conditions of computation. Indeed, this can be detected in the anxiety currently exhibited by a public that has begun to note the automation and datafication of everyday life, the wider effects of a financialized economy, and the resultant claims about the capacity for individuation and critical thinking, for example in the development of “fake news” and the recent use of social media in the election of Donald Trump.

So I want to claim here that data technologies are deployed as what I am calling infrasomatizations. That is, that they are not just exosomatizations, not just the production of tools or instruments. Infrasomatizations are, rather, the production of constitutive infrastructures. Indeed, infrasomatizations rely on a complex fusion of endosomatic capacities and exosomatic technics to create what we might call algorithmic governance (Berns and Rouvroy 2013). So we might consider the way in which infrasomatizations differ in relation to the claims made by Popper, for example, for the role of exosomatization in the development of the capacity for reason and critical thinking.

By infrasomatization I am drawing on the Latin infra, meaning ‘below’, but also its use in anatomy, where infra refers to below or under a part of the body. Therefore, as I previously explained, infrasomatization does not refer to an instrumental notion of technology, but rather to the capacity for framing or creating the conditions of possibility for a particular knowledge milieu. In this sense, certain exosomatizations are actually infrasomatizations, that is, when they are built into the lived environment and act to provide context and associations, both material and symbolic.

Through the creation of specific infrasomatic formations, temporary or otherwise, new modes of knowing and thinking, assembling and acting can be made possible by bringing scale technologies together to create infrastructures. Infrastructure is commonly understood as the basic physical and organizational structures and facilities (e.g. buildings, roads, power supplies) needed for the operation of a society or enterprise. It is also sometimes understood as the social and economic infrastructure of a country. Indeed, Parks argues, the word infrastructure “emerged in the early twentieth century as a collective term for the subordinate parts of an undertaking; substructure, foundation”, that is, as what “engineers refer to as ‘stuff you can kick’” (Parks 2015: 355). Similarly Easterling argues, “the word infrastructure typically conjures up associations with physical networks of transportation, communication, or utilities. Infrastructure is considered to be a hidden substrate – the binding medium or current between objects of positive consequence, shape or law” (Easterling 2016: 11).

But infrastructure is not just the built environment, the cables and wires, the water pipes and transport networks, it is also the technical a priori created in and through computation. It is also notable that talk of infrastructure seems to allow us to get a grip on the ephemerality of data and computation, its seemingly concreteness as a concept, contrasts with that of clouds, streams, files and flows. So we hear about cables and wires, satellites and receivers, chips and boards, and the sheer thingness of these physical objects. But we also need to consider stacks and layers, software and code, algorithms and patterns, together with shared standards, diagrams, interfaces and organisational structures.

Infrasomatizations can be thought of as social-structuring technologies; they have an obduracy that can be mobilised to support specific instances of thought, rationality and action. They are latent technologies that are made to be already ready for use, to be configured and reconfigured, and built into particular constellations that form the underlying structures for social and psychic individuation. Infrasomatization also gestures toward a kind of gigantism, the sheer massiveness of fundamental technologies and resources, their size usefully contrasting with the minuteness or ephemerality of the kinds of personal devices that are increasingly merely interfaces or gateways to underlying infrastructural systems. Today, we talk a lot about data infrastructures as the computational materiality of the highly digital sociality we live in, especially in terms of the questions raised by the relations between the social and social media.

The key question for me is how infrasomatizations are created as infrastructure, and more particularly how these new forms of infrastructure are positioned to change or replace existing institutions. This allows us to think about institutions as knowing-spaces, and forces us to consider the political economic issues of making institutions, combined with a focus on creating specific epistemic communities within them – for example in remaking the university. By institution I am gesturing to specific organizations founded for a religious, educational, professional, or social purpose, such as a university. An institution is a material constellation of bodies, affects, histories, technologies, infrastructures and cultures which is organized but requires infrasomatization to function. By organization I mean a specifically ordered, assembled, and structured group of people brought together for a particular purpose, for example a business, a government department or a political organization.

By connecting the knowledge formations, affective and cognitive styles, and performances made possible within an institution, structured by the particular constellations of infrasomatizations deployed, we might begin to create the grounds for political intervention. For example, Andrew Feenberg has argued that a critical theory of technology requires “counter-acting the tendencies towards domination in the technological a priori” through the “materialization of values” (Feenberg 2013: 613). Thus tactical infrasomatizations are also possible – here I am gesturing towards the rich theoretical work on tactical media which has been extremely important for media activism and theory (see Garcia and Lovink 1997; Raley 2009). Indeed, as Stiegler has argued,

the reticulated digital infrastructure that supports the data economy… can and must be inverted into a neganthropic infrastructure founded on hermeneutic digital technology in the service of dis-automatisation. That is, it should be based on collective investment of the productivity gains derived from automatisation in a culture of knowing how to do, live and think (Stiegler 2016: 15-16). 

For Feenberg, these can be found at specific intervention points within the materialisation of this a priori, such as in design processes. He argues that “design is the mediation through which the potential for domination contained in scientific-technical rationality enters the social world as a civilisational project” (Feenberg 2013: 613). By ascertaining how infrasomatizations affect knowledge formations, we can work to produce new knowledges and practices that contest particular institutional structures.

Understanding the relationship between infrasomatization and organization, and then to the form of the institution, is crucial to constructing progressive institutions. This provides the possibility of contesting problematic institutional forms, particularly their increasingly computational aspects. Hence, we might consider the need for an infrasomatic critique, and the subsequent possibility of contesting the emerging forms of computational technologies, structures, systems and processes.


Berns, T. and Rouvroy, A. (2013) Gouvernementalité algorithmique et perspectives d’émancipation : le disparate comme condition d’individuation par la relation?, accessed 14/12/2016,

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave Macmillan.

Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury.

Feenberg, A. (2013) Marcuse’s Phenomenology: Reading Chapter Six of One-Dimensional Man, Constellations, Volume 20, Number 4, pp. 604-614.

Garcia, D. and Lovink, G. (1997) The ABC of Tactical Media, Nettime, accessed 15/09/16,

Georgescu-Roegen, N. (1970/2011) The Entropy Law and the Economic Problem, in Bonaiuti, M. (Ed.), From Bioeconomics to Degrowth: Georgescu-Roegen’s ‘New Economics’ in Eight Essays, London: Routledge Studies in Ecological Economics, pp. 49–57.

Georgescu-Roegen, N., (1972/2011). Energy and Economic Myths, in Bonaiuti, M. (Ed.), From Bioeconomics to Degrowth: Georgescu-Roegen’s ‘New Economics’ in Eight Essays, London: Routledge Studies in Ecological Economics, pp. 58–92

Georgescu-Roegen, N., (1978/2011) Inequality, Limits and Growth From a
Bioeconomic Viewpoint, in Bonaiuti, M. (Ed.), From Bioeconomics to Degrowth: Georgescu-Roegen’s ‘New Economics’ in Eight Essays, London: Routledge Studies in Ecological Economics, pp. 103–113 (2011).

Easterling, K. (2016) Extrastatecraft: The Power of Infrastructure Space, London: Verso.

Lotka, A. J. (1925) Elements of Physical Biology, Baltimore: Williams & Wilkins Company.

Parks, L. (2015) “Stuff you can kick”: Towards a theory of Media Infrastructures. In Between the humanities and the digital, (Eds, Svensson, P. & Goldberg, D.T.) MIT Press, Cambridge, Massachusetts, pp. 355-373.

Popper, K. (1972) Objective Knowledge: An Evolutionary Approach, Oxford: Oxford University Press.

Raley, R. (2009) Tactical Media, Minneapolis: University of Minnesota Press.

Stiegler, B. (2015a) Power, Powerlessness, Thinking, and Future, Los Angeles Review of Books.

Stiegler, B. (2015b) Symptomatology of the Month of January 2015 in France, accessed 14/12/2016,

Stiegler, B. (2016) The Automatic Society, volume 1: The Future of Work, Cambridge: Polity.

Six Theses on Computational Attention

Thesis 1: Computational attention is a reconfiguration of human attention around a new historical constellation of intelligibility related to technically mediated signalling (e.g. individuated “touch-events”), for example, clicks, touch, taps, nudges, notifications, etc.

Thesis 2: Computational attention is reassembled through labour to make this new mediated attention possible through computational objects, devices, systems and ideologies. It is delegated to and prescribed from technical devices, funnelled and massaged through algorithmic interfaces.

Thesis 3: The subjectivity appropriate to a digital age is reconstructed in relation to this fundamental reconfiguration of human attention under conditions of computation, e.g. enframed and patterned. It is a positive subjectivity in terms of its capacity to generate positive signals of interaction and movement.

Apple implementation of “tapbacks” in Messages App on iOS 10

Thesis 4: New grammars of hyper-attention are developed, so that to “pay attention” becomes to “tapback”, to provide a signal by a technical gesture transmitted through a technical medium (“likes”, “hearts”, “emoticons”).[1] To attention is to click or touch; to anti-attention is to exit (from the app, the webpage, the social group, the country).

Thesis 5: As technical attentioning becomes more important, traditional signalling of attention becomes secondary to the collection of postdigital metrics of attention. For example: how attentive were they? What are they attending to? How can I signal my attention? What are they paying attention to?

Thesis 6: The mediation of attention becomes crucial in the governmentality of postdigital political economy. We must signal that we are “paying attention”: through computational devices we gesture our attentioning. Hence, we are increasingly encouraged to leave attentioning traces through digital interactions on interfaces.

These theses are drawn from a presentation given at the conference Attention humaine / Exo-attention computationnelle in Grenoble, October 2016, organised by Yves Citton.


[1] As an example of signalling attention, Apple uses what it calls tapbacks in its Messages application. These trigger both visual and haptic feedback to demonstrate attention to the conversation.

Tactical Infrastructures

Infrastructures are currently the subject of much scholarly and activist critique (Hu 2015; Parks and Starosielski 2015; Plantin et al 2016; Starosielski 2015). Perhaps not so much in terms of their critically dissected effects and influences, as a form of ideology critique, but more in terms of a new recognition of their importance as conditions of possibility for forms of knowing and acting, together with the creation of epistemic stability and modes of knowledge that can be instrumentalised in particular ways (for a discussion, see Berry 2014).[1] In contrast, rather than describe existing infrastructure, I would like to think through the way in which counter-infrastructures can be thought of as tactical infrastructures. That is, how, through the creation of specific formations, temporary or otherwise, new modes of knowing and thinking, assembling and acting can be made possible by bringing scale technologies together. By tactical infrastructures I am, of course, gesturing towards the rich theoretical work on tactical media which has been extremely important for media activism and theory (see Garcia and Lovink 1997; Raley 2009).[2] I also think it is useful to point towards the work of Liu (2016) and his recent conceptualisation of critical infrastructure studies. I am also drawing on the work of Feenberg, who has argued that a critical theory of technology requires “counter-acting the tendencies towards domination in the technological a priori” through the “materialization of values” (Feenberg 2013: 613). This, Feenberg argues, can be found at specific intervention points within the materialisation of this a priori, such as in design processes. Feenberg argues that “design is the mediation through which the potential for domination contained in scientific-technical rationality enters the social world as a civilisational project” (Feenberg 2013: 613).

Infrastructure is commonly understood as the basic physical and organizational structures and facilities (e.g. buildings, roads, power supplies) needed for the operation of a society or enterprise. It is also sometimes understood as the social and economic infrastructure of a country. Indeed, Parks argues, the word infrastructure “emerged in the early twentieth century as a collective term for the subordinate parts of an undertaking; substructure, foundation”, that is, as what “engineers refer to as ‘stuff you can kick’” (Parks 2015: 355). Infrastructure can be thought of as pre-socialised technologies, not in the sense that the material elements of infrastructure are non-social, but that although they themselves are sociotechnical materialities, they have reached what we might call their quasi-teleological condition. They are latent technologies that are made to be already ready for use, to be configured and reconfigured, and built into particular constellations that form the underlying structures for institutions. Heidegger would say that they are made to stand by. Infrastructure talk also gestures toward a kind of gigantism, the sheer massiveness of fundamental technologies and resources – their size usefully contrasting with the minuteness or ephemerality of the kinds of personal devices that are increasingly merely interfaces or gateways to underlying infrastructural systems.[3] 
Apple highlighting the M9 section of its A9 processor

Today, we talk a lot about data infrastructures, the computational materiality of the highly digital sociality we live in, especially the questions raised by the relations between the social and social media (see also Lovink 2012). But also in terms of the anxiety currently exhibited by a public that has begun to note the datafication of everyday life and the wider effects of a financialized economy. It is also notable that talk of infrastructure seems to allow us to get a grip on the ephemerality of data and computation; its seeming concreteness as a notion contrasts with that of clouds, streams, files and flows. So we hear about cables and wires, satellites and receivers, chips and boards, and the sheer thingness of these physical objects stands in symbolically for the difficulty of visualising computational objects. I use symbolically deliberately because merely discursively asserting a materiality does not make it material. Indeed, most people have never seen an “actual” satellite or an undersea data cable, nor indeed a computer chip or circuit board. They rely on mediations provided by visual representations, such as photographs or videos, that show the thingness of the cables or chips by photographing them. One is reminded of Apple’s turn towards a postdigital aesthetic of chip representation, gloriously shown in glossy marketing videos and component diagrams, displayed in keynote presentations that, whilst reciting the chip speeds, transistor numbers and cycles, dive and swoop over the visualised architecture of the device, selecting and showing black squares in light borders on the CPUs of their phones and computers (see Berry and Dieter 2015). The showing of the chip materiality, seeing it in place within the device, translates the threatening opaqueness of computation into a design motif.

In terms of infrastructures, we might consider the ways in which particular practices of Silicon Valley have become prevalent and tend to shape thinking across the fields affected by computation. For example, the recent turn towards what has come to be called “platformisation”, that is, the construction of a single digital system that acts as a technical monopoly within a particular sector (for a discussion, see Gillespie 2010; Plantin et al 2016). The obvious example here is Facebook in social media. Equally, in discussions of digital research infrastructures there is an understandable tendency towards centralisation and the development of unitary and standardised platforms for the digitalisation, archiving, researching and transformation of such data. Whilst most of these attempts have so far ended in failure, it remains the case that the desire and temptation to develop such a system is very strong, as it creates a transitional path towards the institutionalisation of infrastructures and the alignment of technologies towards an institutional goal or end.

I am interested here in how infrastructures become institutions, and more particularly how tactical infrastructures can be positioned to change or replace institutions. As Tocqueville observed, “what we call necessary institutions are often no more than institutions to which we have grown accustomed.” This is to take forward Merton’s notion that only appropriate institutional change can break through problematic or tragic institutional effects (Merton 1948). I also want to move our attention beyond infrastructures and point their tactical use towards making institutions, in order to think about institutions as knowing-spaces, and how they force us to consider the political economic issues of making institutions, combined with a focus on creating specific epistemic communities within them. Here I am thinking of Fleck’s notion of a “thought collective” as a “nexus of knowledge which manifests itself in a social constraint upon thought” (Fleck 1979: 64). For example, Benkler (2006: 23) has called for a “core common infrastructure”, or a space of non-owned cultural production, making links between the particular values embedded in free-software infrastructures and the kinds of institutions and communities made possible. As he writes, particularly in relation to the internet, “if all network components are owned… then for any communication there must be a willing sender, a willing recipient, and a willing infrastructure owner. In a pure property regime, infrastructure owners have a say over whether, and the conditions under which, others in their society will communicate with each other. It is precisely the power to prevent others from communicating that makes infrastructure ownership a valuable enterprise” (Benkler 2006: 155).

We can think about how institutions generate alternate instantiations of space and time, which thus create the conditions of possibility for new forms of intentionality, thought and action. This also connects to the regulatory aspects of the forms of governance made possible in and through the structures of organization of an institution, and how through combining tactical infrastructures with activism they might be subverted or jammed. In Fleck’s terms this would be to think about the relation between the “thought style”, “thought collective” and the problem of infrastructures. He writes, the thought style “is characterized by common features in the problems of interest to a thought collective, by the judgment which the thought collective considers evident, and by the methods which it applies as a means of cognition” (Fleck 1979: 99). By connecting the affective and cognitive styles and performances made possible within an institution, structured by the particular constellations of infrastructures deployed, we might begin to create the grounds for intervention through the kinds of tactical infrastructure for institutional change that I am exploring here. 

By institution I am gesturing to specific organizations founded for a religious, educational, professional, or social purpose, such as a university or research lab. An institution is a material constellation of bodies, affects, histories, technologies, infrastructures and cultures which is organized. By organization I mean a specifically ordered, assembled, and structured group of people brought together for a particular purpose, for example a business, a government department or a political organization.[4] Understanding the relationship from infrastructure to organization, and then to the form of the institution, is crucial to constructing progressive institutions and providing the possibility of contestation of institutional forms, not just their actions.[5] Hence, to turn to the question of infrastructure critique is also to turn towards ideology critique, and the subsequent possibility of unbuilding and, if necessary, creating counter-infrastructures or tactical infrastructures.[6] To do this it seems to me we have to avoid the dangers of a form of infrastructural fetishism that seeks to show the multiplicity of infrastructures through a project of aestheticisation of infrastructure, whether through photography, data visualisations, or any other media form. What is important is identifying how humans act within institutions and, in doing so, how they create and recreate fundamental elements of social interaction – i.e. how do thought-collectives and thought-styles adapt? – but also asking whether, if we change the fundamental structures of the infrastructures supporting institutions and their organization, we can strengthen the agencies of actors and the institution to work progressively.
[1] There is a need for more ideology critique in relation to infrastructures, making use of the work of STS, software studies, sociology of technology, etc. With the ongoing critical turn in relation to algorithms, data, software and code we should hope to see more work done in infrastructure critique. 
[2] Garcia and Lovink write that “Tactical Media are what happens when the cheap ‘do it yourself’ media, made possible by the revolution in consumer electronics and expanded forms of distribution (from public access cable to the internet) are exploited by groups and individuals who feel aggrieved by or excluded from the wider culture. Tactical media do not just report events, as they are never impartial they always participate and it is this that more than anything separates them from mainstream media… above all [it is] mobility that most characterizes the tactical practitioner. The desire and capability to combine or jump from one media to another creating a continuous supply of mutants and hybrids. To cross borders, connecting and re-wiring a variety of disciplines and always taking full advantage of the free spaces in the media that are continually appearing because of the pace of technological change and regulatory uncertainty” (Garcia and Lovink 1997).
[3] There are normative questions here in regard to scale and methodology, particularly in relation to disciplinary biases towards certain scales and approaches. This is all the more so considering the way in which the digital creates multi-scalar potentials for research methods; it is interesting to consider the way in which scale nonetheless still performs a “truth”-directing role.
[4] There are strong connections here to Lovink and Rossiter’s (2013) notion of Orgnets. 
[5] This is to radicalise the notion of research infrastructures in the digital humanities, for example, where debates over the proper form of research infrastructures tend towards instrumental concerns over technical construction and deployment rather than normative or political issues. For example, many universities select their technical support infrastructures from large proprietary software companies, so in the case of email, Microsoft or IBM might be chosen to allow “integration” with their Office suite, but without considering the wider issues of data sharing, transatlantic movement of student data and work, data mining and so forth. Alan Liu is currently working very interestingly on some of these problematics under the notion of critical infrastructure studies, see Liu (2016). 
[6] This article has been inspired by much fruitful discussion with Michael Dieter, who I have been working with on the notion of critical infrastructures, particularly dark infrastructures, alter-infrastructures and vernacular infrastructures represented by Aaaaarg, Monoskop, Sci-Hub and related infrastructure projects. But we might also think about hacking “toolkits”, crypto parties, hack-labs, copy-parties, data activism and maker spaces as further examples of new structural environments for new forms of knowledge creation, dissemination and storage. Mapping the underlying infrastructures is an important task for thinking about how tactical infrastructures might be deployed. 

Benkler, Y. (2006) The Wealth of Networks, London: Yale University Press.
Bergson, H. (1998) Creative Evolution, New York: Dover Publications.
Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury.
Berry, D. M. and Dieter, M. (2015) Postdigital Aesthetics: Art, Computation and Design, Basingstoke: Palgrave.
Feenberg, A. (2013) Marcuse’s Phenomenology: Reading Chapter Six of One-Dimensional Man, Constellations, Vol. 20, No. 4, pp. 604-614.
Fleck, L. (1979) Genesis and Development of a Scientific Fact, London: The University of Chicago Press.
Garcia, D. and Lovink, G. (1997) The ABC of Tactical Media, Nettime, accessed 15/09/16.
Gillespie, T. (2010) The Politics of “Platforms”, New Media & Society, 12(3), pp. 347-364.
Hu, T.-H. (2015) A Prehistory of the Cloud, Cambridge, MA: The MIT Press.
Liu, A. (2016) Against the Cultural Singularity: Digital Humanities and Critical Infrastructure Studies, YouTube, accessed 15/09/16.
Lovink, G. (2012) What is Social in Social Media?, e-flux journal, #40, December 2012.
Lovink, G. and Rossiter, N. (2013) Organised Networks: Weak Ties to Strong Links, Occupy Times, accessed 04/04/2014.
Merton, R. K. (1948) The Self-Fulfilling Prophecy, The Antioch Review, Vol. 8, No. 2, pp. 193-210.
Parks, L. (2015) “Stuff You Can Kick”: Towards a Theory of Media Infrastructures, in Svensson, P. and Goldberg, D. T. (eds) Between the Humanities and the Digital, Cambridge, MA: MIT Press, pp. 355-373.
Parks, L. and Starosielski, N. (2015) Signal Traffic: Critical Studies of Media Infrastructures, Urbana: University of Illinois Press.
Plantin, J. C., Lagoze, C., Edwards, P. N. and Sandvig, C. (2016) Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook, New Media & Society, 4 August 2016, accessed 16/09/16.
Raley, R. (2009) Tactical Media, Minneapolis: University of Minnesota Press.
Starosielski, N. (2015) The Undersea Network, Durham, NC: Duke University Press.

The Digital Humanities Stack

Thinking about the structure of the digital humanities, it is always helpful if we can visualise it to provide some sort of map or overview. Here, I am exploring a way of representing the digital humanities through the common computer science technique of a software “stack”. This is the idea that a set of software components provides the infrastructure for a given computer system or platform. In a similar way, here I illustrate the discipline of digital humanities with a pictorial representation of the layers of abstraction in the image given below. This gives the reader an idea of what I am calling the digital humanities stack.

The Digital Humanities Stack, illustration by Marcus Leis Allion (Berry 2016)

This type of diagram is common in computing and computer science to show how technologies are “stacked” on top of each other in growing levels of abstraction. Here, I use the method in a more illustrative and creative sense to show the range of activities, practices, skills, technologies, and structures that could be said to make up the digital humanities as an ideal type. This is clearly a simplification, and it is not meant to be prescriptive; rather, it is intended to be helpful to the newcomer to the digital humanities by showing how the varied elements that make up the field fit together. Whilst I can foresee criticisms of the make-up and ordering of the stack I present here, I nonetheless think it provides, more or less, a useful visual guide to the various components of a digital humanities, and contributes towards further understanding of the field. I deliberately decided to leave out the specificity of the “content” elements, for example the different kinds of digital archive that we see across the digital humanities. I think this is acceptable, as the term digital archive captures a wide range of digital databases and archival forms, although it perhaps does not signify strongly enough the related material elements, for example in a “postdigital archive” that includes both digital and non-digital elements. Relatedly, the diagram perhaps does not sufficiently capture something like the materiality of a media archaeological collection.

The diagram can thus be read from the bottom up: the lower levels indicate some of the fundamental elements of the digital humanities stack, such as computational thinking and knowledge representation, on which the other elements later build. Of course, diagrams simplify, and although I would have preferred critical and cultural critique to run through more of the layers, in the end not over-complicating the diagram made for a more easily digestible visual representation. The illustration also stretches the concept of a stack in the strict computer science sense, as it includes institutional layers and non-computational elements, but as a heuristic for thinking about the digital humanities in its specificity I think it can be helpful. As a version 1.0 of the digital humanities stack, I look forward to its reworking, complication and re-articulation in the comments.
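Since the stack metaphor is borrowed from computing, it may be worth sketching in a few lines of Python how such a layered diagram can be represented and rendered as text. This is only an illustrative sketch of the general idea of layered abstraction: apart from “computational thinking” and “knowledge representation”, which are named above as bottom levels, the layer names used here are hypothetical placeholders rather than the actual layers of the published diagram.

```python
# Illustrative sketch of a layered "stack" diagram. Only the first two
# layer names come from the text; the rest are hypothetical placeholders.
layers = [  # ordered bottom (most fundamental) to top
    "computational thinking",
    "knowledge representation",
    "digital archives and data",
    "tools and methods",
    "interfaces and publications",
]

def render_stack(layers):
    """Render the layers top-down, as a stack diagram would display them."""
    width = max(len(layer) for layer in layers) + 4
    border = "+" + "-" * (width - 2) + "+"
    rows = ["| " + layer.center(width - 4) + " |"
            for layer in reversed(layers)]  # top of the stack printed first
    return "\n".join([border] + rows + [border])

print(render_stack(layers))
```

Reading the rendered output top-down mirrors reading the diagram itself, with the most user-facing elements at the top resting on the more fundamental layers beneath.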

New Book: Digital Humanities

The new book Digital Humanities, authored by David M. Berry and Anders Fagerjord, is in production with Polity and will be available in April 2017.