Category Archives: Philosophy of Software

Against the Computational Creep

In this short post I want to think about the limits of computation: not the theoretical limits of the application or theorisation of computation itself, but the limits within which computation in a particular context should be contained. This is necessarily a normative position, but what I am trying to explore is the point at which computation, which can bring great advantages to a process, institution or organisation, starts to undermine or corrode the way in which that group, institution or organisation is understood, functions, or creates a shared set of meanings. Here, though, I will limit myself to theorising this, rather than drawing out its methodological implications, and to asking how we might begin to develop a politics of computation that is able to test and articulate these limits, and to understand the development of a set of critical approaches that are also a politicisation of algorithms and of data.

By computational creep I mean the development of computation as a process rather than an outcome or thing (Ross 2017: 14). This notion of “creep” has been usefully identified by Ross in relation to extreme political movements that advance by what he calls “positive intermingling”.[1] I think this is a useful way to think about computationalism, by which I do not merely mean the idea that consciousness is modelled on computation (e.g. see Golumbia 2009), but more broadly a set of ideas and a style of thought which holds that computational approaches are by their very nature superior to other ways of thinking and doing (Berry 2011, 2014). This is also related to the notion that anything that has not been “disrupted” by computation is, by definition, inferior in some sense, or is latent material awaiting its eventual disruption or reinvention through the application of computation. I would like to argue that this process of computational creep takes place in six stages:

  1. Visionary-computational: Computation is suggested as a solution to an existing system or informal process. These discourses are articulated with very little critical attention to the detail of making computational systems or the problems they create. Usually, as Golumbia (2017) explains, they draw on a metaphysics of information and computation that bears little relation to the material reality of the eventual or existing computational systems. It is here, in particular, that the taken-for-grantedness of the improvements of computation is uncritically deployed, usually with little resistance. 
  2. Proto-computational: One-off prototypes are developed to create notional efficiencies, manage processes, or ease the reporting and aggregation of data. Often there is an associated discourse claiming that this creates “new ways of seeing” that enable patterns to be identified which were previously missed. These systems often do not meet the required needs, but these early failures, rather than being taken as grounds for questioning the computational, serve to justify more computation, often more radically implemented, with greater change being called for in order to make the computational work. 
  3. Micro-computational: A wider justification emerges for small-scale projects to implement computational microsystems. These are often accompanied by the discursive rationalisation of informal processes, or the justification of these systems by the greater insight they produce. This is where a decision has been taken to begin computational development, sometimes at a lightweight scale, but nonetheless the language of computation, both technically and as metaphor, starts to be deployed more earnestly as justification. 
  4. Meso-computational: Medium-scale systems are created which draw from or supplement the minimal computation already in process. This discourse is often manifest in multiple, sometimes co-existing and incompatible computations, differing ways of thinking about algorithms as a solution to problems, and multiple and competing data acquisition and storage practices. At this stage the computational is beyond question: it is taken a priori that a computational system is required, and where there are failures, more computation and more social change to facilitate it are demanded. 
  5. Macro-computational: Large-scale investment is made to manage what has become a complex informational and computational ecology. This discourse is often associated with attempts to create interoperability through mediating systems, or with the provision of new interfaces for legacy computational systems. At this stage, computation is seen as a source of innovation and disruption that rationalises social processes and helps manage and control individuals. These are taken to be goods in and of themselves that avoid mistakes, bad behaviour, poor social outcomes and suchlike. The computational is now essentially metaphysical in its justificatory deployment, and the suggestion that computation might be making things worse is usually met with derision. 
  6. Infra-computational: Calls are made for the overhaul and/or replacement of major components of these systems, perhaps with a platform, and for the rationalisation of social practices through user interface design, hierarchical group controls over data, and centralised data stores. This discourse is often accompanied by large-scale data tracking, monitoring and control over individual work and practices. This is where the notion of the top-view, that is, the idea of management information systems (MIS), data analytics, large-scale Big Data pattern-matching and control through algorithmic intervention, is often reinforced. In this phase the system requires the free movement of data under an open definition (e.g. open data, open access, open knowledge), which allows the standardisation and shareability of data entities, and therefore further processing and softwarization. This phase often serves as an imaginary and is therefore not necessarily ever completed, its failures serving as further justification for new infrastructures and new systems to replace earlier failed versions. 

This line of thinking draws on the work of David Golumbia, particularly the notion of Matryoshka dolls that he takes from the work of Philip Mirowski. This refers to multiple levels or shells of ideas that form a system of thinking, but which is not necessarily coherent as such, nor free of contradiction, particularly across the different layers of the shells. This is what “Mirowski calls the ‘Russian doll’ approach to the integration of research and praxis in the modern world” (Golumbia 2017: 5). Golumbia makes links between this way of thinking about neoliberalism, as a style of thinking that utilises this multi-layered aspect, and technolibertarianism, but here I want to think about computational approaches more broadly, that is, as instrumental-rational techniques of organisation. In other words, I want to point to the way in which computation is implemented, usually in a small-scale way, within an institutional context, where it acts as an entry-point for further rationalisation and computation. This early opening creates the opportunity for more intensive computation, implemented in a bricolage fashion: at least initially, there is no systematic attempt to replace an existing system, but over time, with the addition and accretion of computational partialities, calls grow for the overhaul of what is now a tangled and somewhat contradictory series of micro-computationalisms into a broader computational system or platform. Eventually this leads to a macro- or infra-computational environment which can be described as functioning as algorithmic governmentality, but which remains ever unfinished, with inconsistencies, bugs and irrationalities throughout the system (see Berns and Rouvroy 2013). The key point is that at all stages of computationally adapting an existing process, there are multiple overlapping and sometimes contradictory processes in operation, even in large-scale computation.

Here I think that Golumbia’s discussion of the “sacred myths among the digerati” is very important, as it is this set of myths that goes unquestioned, especially early on in the development of a computational project, particularly at what I am calling the visionary-computational and proto-computational phases, but equally throughout the growth of computational penetration. Some of these myths include: claims of efficiency, the notion of cost savings, the idea of communications improvement, and the safeguarding of corporate or group memory. In other words, before a computerisation project is started, these justifications are already being mobilised in order to justify it, without any critical attention to where these a priori claims originate or their likely truth content.

This use of computation is not limited to standardised systems, of course, by which I mean instrumental-rational systems that are converted from a paper-based process into a software-based process. Indeed, computation is increasingly being deployed in a cultural and sociological capacity: for example, to manage individuals and their psychological and physical well-being, to manage or shape culture through interventions and monitoring, and to shape the capacity to work together as teams and groups, and hence to shape particular kinds of subjectivity. Here there are questions more generally for automation and the creation of what we might call human-free technical systems, but also for the conditions of possibility of what Bernard Stiegler calls the Automatic Society (Stiegler 2016). It is also related to the introduction of digital and computational systems into areas not previously thought of as amenable to computation, for example in the humanities, as represented by the growth of the digital humanities (Berry 2012, Berry and Fagerjord 2017).

That is to say, “the world of the digital is everywhere structured by these fictionalist equivocations over the meanings of central terms, equivocations that derive an enormous part of their power from the appearance that they refer to technological and so material and so metaphysical reality” (Golumbia 2017: 34). Of course, the reality is that these claims are often unexamined and uncritically accepted, even when they are corrosive in their implementations. Where these computationalisms are disseminated and their creep goes beyond social and cultural norms, it is right that we ask: how much computation can a particular social group or institution stand, and what should be the response to it? (see Berry 2014: 193 for a discussion in relation to democracy). It should certainly be the case that we move beyond allowing the partial success of computation to imply that more computation is by necessity better. So by critiquing computational creep, through the notion of the structure of the Russian doll in relation to computational processes of justification and implementation, together with the metaphysical a priori claims for the superiority of computational systems, we are better able to develop a means of containment, or algorithmic criticism. Thus, through a critical theory that provides a ground for normative responses to the unchecked growth of computation across multiple aspects of our lives and society, we can look to the possibilities of computation without seeing it as necessarily inevitable or deterministic of our social life (see Berry 2014).


[1] The title “Against the Computational Creep” is a reference to the very compelling book Against the Fascist Creep by Alexander Reid Ross. The intention is not to make an equivalence between fascism and computation; rather, I am interested in the concept of the “creep”, which Ross explains involves the small-scale, gradual use of particular techniques, the importation of ways of thinking, or the use of a form of entryism. In this article, of course, the notion of the computational creep therefore refers to the piecemeal use of computation, or the importation of computational practices and metaphors into a previously non-computational arena or sphere, and the resultant change in the ways of doing, ways of seeing and ways of being that this computational softwarization tends to produce. 


Berns, T. and Rouvroy, A. (2013) Gouvernementalité algorithmique et perspectives d’émancipation : le disparate comme condition d’individuation par la relation?, accessed 14/12/2016.

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave Macmillan.

Berry, D. M. (2012) Understanding Digital Humanities, Basingstoke: Palgrave.

Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury

Berry, D. M. and Fagerjord, A. (2017) Digital Humanities: Knowledge and Critique in a Digital Age, Cambridge: Polity.

Golumbia, D. (2009) The Cultural Logic of Computation, Harvard University Press.

Golumbia, D. (2017) Mirowski as Critic of the Digital, boundary 2 symposium, “Neoliberalism, Its Ontology and Genealogy: The Work and Context of Philip Mirowski”, University of Pittsburgh, March 16-17, 2017

Ross, A. R. (2017) Against the Fascist Creep, Chico, CA: AK Press.

Stiegler, B. (2016) The Automatic Society, Cambridge: Polity.


Six Theses on Computational Attention

Thesis 1: Computational attention is a reconfiguration of human attention around a new historical constellation of intelligibility related to technically mediated signalling (e.g. individuated “touch-events”), for example, clicks, touch, taps, nudges, notifications, etc.

Thesis 2: Computational attention is reassembled through labour to make this new mediated attention possible through computational objects, devices, systems and ideologies. It is delegated to and prescribed from technical devices, funnelled and massaged through algorithmic interfaces.

Thesis 3: The subjectivity appropriate to a digital age is reconstructed in relation to this fundamental reconfiguration of human attention under conditions of computation, e.g. enframed and patterned. It is a positive subjectivity in terms of its capacity to generate positive signals of interaction and movement.

[Image: Apple’s implementation of “tapbacks” in the Messages app on iOS 10]

Thesis 4: New grammars of hyper-attention are developed, so that to “pay attention” becomes to “tapback”, to provide a signal by a technical gesture transmitted through a technical medium (“likes”, “hearts”, “emoticons”).[1] To attention is to click or touch; to anti-attention is to exit (from the app, the webpage, the social group, the country).

Thesis 5: As technical attentioning becomes more important, traditional signalling of attention becomes secondary to the collection of postdigital metrics of attention. For example: how attentive were they? What are they attending to? How can I signal my attention? What are they paying attention to?

Thesis 6: The mediation of attention becomes crucial in the governmentality of postdigital political economy. We must signal that we are “paying attention”; through computational devices we gesture our attentioning. Hence, we are increasingly encouraged to leave attentioning traces through digital interactions on interfaces.

These theses are drawn from a presentation given at the conference Attention humaine / Exo-attention computationnelle in Grenoble, October 2016, organised by Yves Citton.


[1] As an example of signalling attention, Apple uses what it calls tapbacks in its Messages application. These trigger both visual and haptic feedback to demonstrate attention to the conversation. 

Tactical Infrastructures

Infrastructures are currently the subject of much scholarly and activist critique (Hu 2015; Parks and Starosielski 2015; Plantin et al 2016; Starosielski 2015). Perhaps not so much in terms of their critically dissected effects and influences, as a form of ideology critique, but more in terms of a new recognition of their importance as conditions of possibility for forms of knowing and acting, together with the creation of epistemic stability and modes of knowledge that can be instrumentalised in particular ways (for a discussion, see Berry 2014).[1] In contrast, rather than describe existing infrastructure, I would like to think through the way in which counter-infrastructures can be thought of as tactical infrastructures. That is, how, through the creation of specific formations, temporary or otherwise, new modes of knowing and thinking, assembling and acting can be made possible by bringing technologies together at scale. By tactical infrastructures I am, of course, gesturing towards the rich theoretical work on tactical media, which has been extremely important for media activism and theory (see Garcia and Lovink 1997; Raley 2009).[2] I also think it is useful to point towards the work of Liu (2016) and his recent conceptualisation of critical infrastructure studies. I am also drawing on the work of Feenberg, who has argued that a critical theory of technology requires “counter-acting the tendencies towards domination in the technological a priori” through the “materialization of values” (Feenberg 2013: 613). This, Feenberg argues, can be found at specific intervention points within the materialisation of this a priori, such as in design processes. Feenberg argues that “design is the mediation through which the potential for domination contained in scientific-technical rationality enters the social world as a civilisational project” (Feenberg 2013: 613).

Infrastructure is commonly understood as the basic physical and organizational structures and facilities (e.g. buildings, roads, power supplies) needed for the operation of a society or enterprise. It is also sometimes understood as the social and economic infrastructure of a country. Indeed, Parks argues, the word infrastructure “emerged in the early twentieth century as a collective term for the subordinate parts of an undertaking; substructure, foundation”, that is, as what “engineers refer to as ‘stuff you can kick’” (Parks 2015: 355). Infrastructure can be thought of as pre-socialised technologies, not in the sense that the material elements of infrastructure are non-social, but that although they themselves are sociotechnical materialities, they have reached what we might call their quasi-teleological condition. They are latent technologies that are made to be already ready for use, to be configured and reconfigured, and built into particular constellations that form the underlying structures for institutions. Heidegger would say that they are made to stand by. Infrastructure talk also gestures toward a kind of gigantism, the sheer massiveness of fundamental technologies and resources – their size usefully contrasting with the minuteness or ephemerality of the kinds of personal devices that are increasingly merely interfaces or gateways to underlying infrastructural systems.[3] 
[Image: Apple highlighting the M9 section of its A9 processor]

Today we talk a lot about data infrastructures, the computational materiality of the highly digital sociality we now live in, especially in the questions raised about the relations between the social and social media (see also Lovink 2012), but also in terms of the anxiety currently exhibited by a public that has begun to note the datafication of everyday life and the wider effects of a financialized economy. It is also notable that talk of infrastructure seems to allow us to get a grip on the ephemerality of data and computation: its seeming concreteness as a notion contrasts with that of clouds, streams, files and flows. So we hear about cables and wires, satellites and receivers, chips and boards, and the sheer thingness of these physical objects stands in symbolically for the difficulty of visualising computational objects. I use “symbolically” deliberately, because merely discursively asserting a materiality does not make it material. Indeed, most people have never seen an “actual” satellite or an undersea data cable, nor indeed a computer chip or circuit board. They rely on mediations provided by visual representations, such as photographs or videos, that show the thingness of the cables or chips by picturing them. One is reminded of Apple’s turn towards a postdigital aesthetic of chip representation, gloriously shown in glossy marketing videos and component diagrams, displayed in keynote presentations that, whilst reciting the chip speeds, transistor counts and cycles, dive and swoop over the visualised architecture of the device, selecting and showing black squares in light borders on the CPUs of their phones and computers (see Berry and Dieter 2015). The showing of the chip materiality, seeing it in place within the device, translates the threatening opaqueness of computation into a design motif.  

In terms of infrastructures, we might consider the ways in which particular practices of Silicon Valley have become prevalent and tend to shape thinking across the fields affected by computation. For example, the recent turn towards what has come to be called “platformisation”, that is, the construction of a single digital system that acts as a technical monopoly within a particular sector (for a discussion, see Gillespie 2010; Plantin et al 2016). The obvious example here is Facebook in social media. Equally, in discussions of digital research infrastructures there is an understandable tendency towards centralisation and the development of unitary and standardised platforms for the digitalisation, archiving, researching and transformation of such data. Whilst most of these attempts have so far ended in failure, it remains the case that the desire and temptation to develop such a system is very strong, as it creates a transitional path towards the institutionalisation of infrastructures and the alignment of technologies towards an institutional goal or end. 

I am interested here in how infrastructures become institutions, and more particularly in how tactical infrastructures can be positioned to change or replace institutions. As Tocqueville observed, “what we call necessary institutions are often no more than institutions to which we have grown accustomed.” This is to take forward Merton’s notion that only appropriate institutional change can break through problematic or tragic institutional effects (Merton 1948). I also want to move our attention beyond infrastructures and point their tactical use towards making institutions, in order to think about institutions as knowing-spaces, and to consider the political-economic issues of making institutions, combined with a focus on creating specific epistemic communities within them. Here I am thinking of Fleck’s notion of a “thought collective” as a “nexus of knowledge which manifests itself in a social constraint upon thought” (Fleck 1979: 64). For example, Benkler (2006: 23) has called for a “core common infrastructure”, or a space of non-owned cultural production, making links between the particular values embedded in free-software infrastructures and the kinds of institutions and communities made possible. As he writes, particularly in relation to the internet, “if all network components are owned… then for any communication there must be a willing sender, a willing recipient, and a willing infrastructure owner. In a pure property regime, infrastructure owners have a say over whether, and the conditions under which, others in their society will communicate with each other. It is precisely the power to prevent others from communicating that makes infrastructure ownership a valuable enterprise” (Benkler 2006: 155).

We can think about how institutions generate alternate instantiations of space and time, which thus create the conditions of possibility for new forms of intentionality, thought and action. This also connects to the regulatory aspects of the forms of governance made possible in and through the structures of organization of an institution, and how through combining tactical infrastructures with activism they might be subverted or jammed. In Fleck’s terms this would be to think about the relation between the “thought style”, “thought collective” and the problem of infrastructures. He writes, the thought style “is characterized by common features in the problems of interest to a thought collective, by the judgment which the thought collective considers evident, and by the methods which it applies as a means of cognition” (Fleck 1979: 99). By connecting the affective and cognitive styles and performances made possible within an institution, structured by the particular constellations of infrastructures deployed, we might begin to create the grounds for intervention through the kinds of tactical infrastructure for institutional change that I am exploring here. 

By institution I am gesturing to specific organizations founded for a religious, educational, professional, or social purpose, such as a university or research lab. An institution is a material constellation of bodies, affects, histories, technologies, infrastructures and cultures which is organized. By organization I mean a specifically ordered, assembled, and structured group of people for a particular purpose, for example a business, a government department or a political organization.[4] Understanding the relationship of infrastructure to organization, and then to the form of the institution, is crucial to constructing progressive institutions and providing the possibility of contesting institutional forms, not just their actions.[5] Hence, to turn to the question of infrastructure critique is also to turn towards ideology critique, and the subsequent possibility of unbuilding and, if necessary, creating counter-infrastructures or tactical infrastructures.[6] To do this, it seems to me, we have to avoid the dangers of a form of infrastructural fetishism that seeks to show the multiplicity of infrastructures through a project of aestheticisation of infrastructure, whether through photography, data visualisations, or any other media form. What is important is identifying how humans act within institutions and, in doing so, how they create and recreate fundamental elements of social interaction – i.e. how do thought-collectives and thought-styles adapt? – but also asking whether, if we change the fundamental structures of the infrastructures supporting institutions and their organization, we can strengthen the agencies of actors and the institution to work progressively. 
[1] There is a need for more ideology critique in relation to infrastructures, making use of the work of STS, software studies, sociology of technology, etc. With the ongoing critical turn in relation to algorithms, data, software and code we should hope to see more work done in infrastructure critique. 
[2] Garcia and Lovink write that “Tactical Media are what happens when the cheap ‘do it yourself’ media, made possible by the revolution in consumer electronics and expanded forms of distribution (from public access cable to the internet) are exploited by groups and individuals who feel aggrieved by or excluded from the wider culture. Tactical media do not just report events, as they are never impartial they always participate and it is this that more than anything separates them from mainstream media… above all [it is] mobility that most characterizes the tactical practitioner. The desire and capability to combine or jump from one media to another creating a continuous supply of mutants and hybrids. To cross borders, connecting and re-wiring a variety of disciplines and always taking full advantage of the free spaces in the media that are continually appearing because of the pace of technological change and regulatory uncertainty” (Garcia and Lovink 1997).
[3] There are normative questions here in regard to scale and methodology, particularly in relation to disciplinary biases towards certain scales and approaches. More so considering the way in which the digital creates multi-scalar potentials for research methods – it is interesting to consider the way in which scale still performs a “truth”-directing role nonetheless.
[4] There are strong connections here to Lovink and Rossiter’s (2013) notion of Orgnets. 
[5] This is to radicalise the notion of research infrastructures in the digital humanities, for example, where debates over the proper form of research infrastructures tend towards instrumental concerns over technical construction and deployment rather than normative or political issues. For example, many universities select their technical support infrastructures from large proprietary software companies, so in the case of email, Microsoft or IBM might be chosen to allow “integration” with their Office suite, but without considering the wider issues of data sharing, transatlantic movement of student data and work, data mining and so forth. Alan Liu is currently working very interestingly on some of these problematics under the notion of critical infrastructure studies, see Liu (2016). 
[6] This article has been inspired by much fruitful discussion with Michael Dieter, who I have been working with on the notion of critical infrastructures, particularly dark infrastructures, alter-infrastructures and vernacular infrastructures represented by Aaaaarg, Monoskop, Sci-Hub and related infrastructure projects. But we might also think about hacking “toolkits”, crypto parties, hack-labs, copy-parties, data activism and maker spaces as further examples of new structural environments for new forms of knowledge creation, dissemination and storage. Mapping the underlying infrastructures is an important task for thinking about how tactical infrastructures might be deployed. 

Benkler, Y. (2006) The Wealth of Networks. London: Yale University Press.
Bergson, H. (1998) Creative Evolution. New York: Dover Publications.
Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury
Berry, D. M. and Dieter, M. (2015) Postdigital Aesthetics: Art, Computation and Design, Basingstoke: Palgrave. 
Feenberg, A. (2013) Marcuse’s Phenomenology: Reading Chapter Six of One-Dimensional Man, Constellations, Volume 20, Number 4, pp. 604-614.
Fleck, L. (1979) Genesis and Development of a Scientific Fact, London: The University of Chicago Press.

Garcia, D. and Lovink, G. (1997) The ABC of Tactical Media, Nettime, accessed 15/09/16.

Gillespie T (2010) The politics of “platforms”, New Media & Society 12(3): 347–364.

Hu T.-H. (2015) A Prehistory of the Cloud. Cambridge, MA: The MIT Press.

Liu, A. (2016) Against the Cultural Singularity: Digital Humanities and Critical Infrastructure Studies, Youtube, accessed 15/09/16.

Lovink, G. (2012) What is Social in Social Media?, e-flux journal, #40, December 2012. 
Lovink, G. and Rossiter, N. (2013) Organised Networks: Weak Ties to Strong Links, Occupy Times, accessed 04/04/2014.
Merton, R. K. (1948) The Self-Fulfilling Prophecy, The Antioch Review, Vol. 8, No. 2 (Summer, 1948), pp. 193-210 
Parks, L. (2015) “Stuff you can kick”: Towards a theory of Media Infrastructures. In Between the humanities and the digital, (Eds, Svensson, P. & Goldberg, D.T.) MIT Press, Cambridge, Massachusetts, pp. 355-373.
Parks, L. and Starosielski, N. (2015) Signal Traffic: Critical Studies of Media Infrastructures, Illinois: University of Illinois Press.

Plantin, J. C., Lagoze, C., Edwards, P. N., and Sandvig, C. (2016) Infrastructure studies meet platform studies in the age of Google and Facebook, New Media & Society, August 4, 2016, accessed 16/09/16.

Raley, R. (2009) Tactical Media, Minneapolis: University of Minnesota Press. 

Starosielski N. (2015) The Undersea Network. Durham, NC: Duke University Press.


[Image: French philosopher François Laruelle]

If we take seriously the claims of François Laruelle, the French “non-philosopher”,[1] that it is possible to undertake a “non-philosophy”, a project that seeks out a “non-philosophical kernel” within a philosophical system, then what would be the implications of what we might call a “non-media”? For example, in seeking a “non-Euclidean” Marxism, Laruelle argues that we could uncover, in some sense, the non-philosophical “ingredient”, as Galloway (2012: 6) calls it. That is, to find the non-Marxist “kernel” which serves as the starting point, both as “symptom and model”. Indeed, Laruelle himself undertook such a project in relation to Marxism in Introduction au non-marxisme (2000), where he sought to “‘philosophically impoverish’ Marxism, with the goal of ‘universalising’ it through a ‘scientific mode of universalisation’” (Galloway 2012: 194). That is, Laruelle, in Galloway’s interpretation, seeks to develop “an ontological platform that, while leaving room for certain kinds of causality and relation, radically denies exchange in any form whatsoever” (Galloway 2012: 194). Indeed, Galloway argues that, in

deviating too from 'process philosophers' like Deleuze, who must necessarily endorse exchange at some level, Laruelle advocates a mode of expression that is irreversible. He does this through a number of interconnected concepts, the most important of which being 'determination-in-the-last-instance' (DLI). Having kidnapped the term from its Althusserian Marxist home, Laruelle uses DLI to show how there can exist causality that is not reciprocal, how a 'relation' can exist that is not at the same time a 'relation of exchange', indeed how a universe might look if it was not already forged implicitly from the mould of market capitalism (Galloway 2012: 195).

That is, the exploration and therefore refusal of exchange at the level of ontology, rather than the level of politics – at what we might call, following Laclau and Mouffe (2001), the political. Laruelle argues that this “philosophical” decision is the target of his critique,

What is probably wounding for philosophers is the fact that, from the point of view I have adopted, I am obliged to posit that there is no principle of choice between a classical type of ontology and the deconstruction of that ontology. There is no reason to choose one rather than the other. This is a problem that I have discussed at great length in my work (Les philosophies de la différence), whether there can be a principle of choice between philosophies. Ultimately, it is the problem of the philosophical decision (Laruelle, quoted in Mackay 2005).

Thus the diagnosis is not the lack of a philosophy at the centre of works, but rather an excess, which results in the subversion of a system of abstraction that in some sense problematically uses exchange as an axiomatic. This is the notion that exchange renders possible the full convertibility of entities, a form of philosophical violence towards the multiplicity of the "real", and which founds a form of thinking that becomes hegemonic as a condition of possibility for thought – even radically anti-capitalist thought within philosophy as defined. Instead, Laruelle suggests we get the essence of something from the "real", as it were. Indeed, when Derrida asked "Where do [you] get this [essence] from?" Laruelle answered "I get it from the thing itself" (Laruelle, quoted in Mackay 2005). Laruelle argues,

We start from the One, rather than arriving at it. We start from the One, which is to say that if we go anywhere, it will be toward the World, toward Being. And I frequently use a formulation which is obviously shocking to philosophers and particularly those of a Platonist or Plotinian bent: it’s not the One that is beyond Being, it is Being that is beyond the One. It is Being that is the other of the One (Laruelle, quoted in Mackay 2005).

Within this formulation, Galloway argues that there is evidence that Laruelle is a “vulgar determinist and unapologetically so”. That is, that for Laruelle,

The infrastructure of the material base is a given-without-givenness because and only because of its ability to condition and determine – unidirectionally, irreversibly, and in the ‘last instance’ – whatever it might condition and determine, in this case the superstructure. Thus the infrastructure stands as ‘given’ while still never partaking in ‘givenness’, neither as a thing having appeared as a result of previous givenness, nor a present givenness engendering the offspring of subsequent givens (Galloway 2012: 199).

There are clear totalitarian implications in this formulation of determinism running from a material base, and in the resultant liquidation of the possibility of autonomy as a critical concept. Indeed, the overtones of a kind of scientific Marxism, read through a simplistic Newtonian theorisation of science, seem limited and regressive – not least politically, in that self-determination and individuation are surrendered to the causal first cause, called the "One", to which "clones" remain subservient in determinism. As Srnicek describes,

At the highest level, one ultimately reaches what is called the One – the highest principle from which everything derives. Now there are a number of reasons why this highest level must be one – meaning singular, unified and simple. The first basic reason is that if it weren’t simple, then it could be decomposed into its constituent parts. The highest principle of reality must not admit of multiplicity, but must instead be the singular principle that itself explains multiplicity. Now as a simple principle, it must be impossible to predicate anything of it (Srnicek 2011: 2).


Non-philosophy is not just a theory but a practice. It re-writes or re-describes particular philosophies, but in a non-transcendental form—non-aesthetics, non-Spinozism, non-Deleuzianism, and so on. It takes philosophical concepts and subtracts any transcendence from them in order to see them, not as representations, but as parts of the Real or as alongside the Real (Mullarkey 2006: 134).

An approach to media that incorporates non-philosophy, a "non-media", would then be a rigorous non-philosophical knowledge of the "kernel" of media: the deterministic causality grounded in an ontology of media that stresses its unidirectional causality and its ultimate status as the ground of possibility. That is, a set of realist claims from a rigorously non-philosophical tradition, seeking to get at the core of media, from the "thing in itself". Indeed, Laruelle himself has talked about the links between philosophy and media, as Thacker outlines,

Near the end of his essay “The Truth According to Hermes,” François Laruelle points out the fundamental link between philosophy and media. All philosophy, says Laruelle, subscribes to the “communicational decision,” that everything that exists can be communicated. In this self-inscribed world, all secrets exist only to be communicated, all that is not-said is simply that which is not-yet-said. One senses that, for Laruelle, the communicational decision is even more insidious than the philosophical decision. It’s one thing to claim that everything that exists, exists for a reason. It’s quite another to claim that everything-that-exists-for-a-reason is immediately and transparently communicable, in its reason for existing. If the philosophical decision is a variant on the principle of sufficient reason, then the communicational decision adds on top of it the communicability of meaning (Thacker 2010: 24).

Hermes was the swift-footed messenger,
trusted ambassador of all the gods,
and conductor of shades to Hades. 

Indeed, Laruelle disdains the communicational: "meaning, always more meaning! Information, always more information! Such is the mantra of hermeto-logical Difference, which mixes together truth and communication, the real and information" (Laruelle, quoted in Thacker 2010: 24). A radically realist non-media would then dismiss the interpretative moment of understanding in fidelity to the One, locating the source of the communicational in the "material" base from which all causality springs.[2] This would seem to be a step that not only dismisses any possibility of a philosophical or theoretical understanding of media in terms of its materiality, as such, but also the possibility of any agency created as a result of the material or technical a priori of media. In this sense, it is not difficult to share Derrida's repudiation of the possibility of a non-philosophy, and also to question its claims to work at the level of ontology, outside of philosophy and outside of interpretation (see Mackay 2005). Instead, does such a claim rather represent a totalising moment in thought, an example or claim of a non-mediated experience, devoid of thought itself and therefore of the possibility of critical reason and politics? The "terror" of the real, in such a formulation, represents not a radical break with contemporary thought, here cast as "philosophical", but rather the real as the horizon of thought and its limit.[3]


[1] Ray Brassier (2003) has described François Laruelle as "the most important unknown philosopher working in Europe today".
[2] It appears that the notion of "material" in this account increasingly looks less like a historical materialist account and more like a synchronic metaphysics cast as a "realism" outside of human history as such. Philosophy, history, culture and so forth become merely epiphenomena "determined" by the "material", or perhaps better, real, base of the "thing in itself".
[3] It is worth noting the contradiction of a position that claims such an overarching determinism stemming from the One: it inevitably undermines its own claims to veracity, since such determinism would naturally have "caused" Laruelle to have written his books in the first place, providing no possibility of agency to assess the claims made, as the individual agency (such as there is) of readers and commentators would also be locked into this deterministic structure. Even were one to detect such claims, one's consciousness, having been formed from this source, would itself be tainted by that determinism.


Brassier, R. (2003) Axiomatic Heresy: The Non-Philosophy of Francois Laruelle, Radical Philosophy 121, Sep/Oct 2003.

Galloway, A. R. (2012) Laruelle, Anti-Capitalist, in Mullarkey, J. and Smith, A. P. (eds.) Laruelle and Non-Philosophy, Edinburgh: Edinburgh University Press.

Laclau, E. and Mouffe, C. (2001) Hegemony and Socialist Strategy: Towards a Radical Democratic Politics, London: Verso Books.

Mackay, R. (2005) Controversy over the Possibility of a Science of Philosophy, La Decision Philosophique No. 5, April 1988, pp. 62-76, accessed 17/01/2014,

Mullarkey, J. (2006) Post-continental Philosophy: An Outline, London: Continuum.

Srnicek, N. (2011) François Laruelle, the One and the Non-Philosophical Tradition, Pli: The Warwick Journal of Philosophy, 22, 2011, pp. 187-198, accessed 17/01/2014,

Thacker, E. (2010) Mystique of Mysticism, in Galloway, A. R., French Theory Today: An Introduction to Possible Futures, TPSNY/Erudio Editions.

Signposts for the Future of Computal Media

I would like to begin to outline what I think are some of the important trajectories to keep an eye on in regard to what I increasingly think of as computal media. That is, the broad area dependent on computational processing technologies, or areas soon to be colonised by such technologies.

In order to do this I want to examine a number of key moments that structure thinking about the softwarization of media. By "softwarization", I mean broadly the notion of Andreessen (2011) that "software is eating the world" (see also Berry 2011; Manovich 2013). Softwarization is then a process of the application of computation (see Schlueter Langdon 2003), in this case to all forms of historical media, but also in the generation of born-digital media.
However, this process of softwarization is tentative, multi-directional, contested, and moving on multiple strata at different modularities and speeds. We therefore need to develop critiques of the concepts that drive these processes of softwarization, but also to think about what kinds of experience make the epistemological categories of the computal possible. For example, one feature that distinguishes the computal is its division into surfaces, rough or pleasant, and concealed, inaccessible structures.
It seems to me that this task is rightly a critical undertaking. That is, an historical materialism that understands that the key organising principles of our experience are produced by ideas developed within the array of social forces that human beings have themselves created. This includes understanding the computal subject as an agent dynamically contributing and responding to the world.
So I want to now look at a number of moments to draw out some of what I think are the key developments to be attentive to in computal media. That is, not the future of new media as such, but rather “possibilities” within computal media, sometimes latent but also apparent. 
The Industrial Internet
A new paradigm called the "industrial internet" is emerging: a computational, real-time streaming ecology reconfigured in terms of digital flows, fluidities and movement. In the new industrial internet the paradigmatic metaphor I want to use is that of real-time streaming technologies and the data flows, processual stream-based engines, and the computal interfaces and computal "glue" holding them together. This is the internet of things and the softwarization of everyday life, and it represents the beginning of a post-digital experience of computation as such.
This calls for us to stop thinking about the digital as something static, discrete and object-like and instead consider ‘trajectories’ and computational logistics. In hindsight, for example, it is possible to see that new media such as CDs and DVDs were only ever the first step on the road to a truly computational media world. Capturing bits and disconnecting them from wider networks, placing them on plastic discs and stacking them in shops for us to go visit and buy seems bizarrely pedestrian today. 
The taking account of such media and related cultural practices becomes increasingly algorithmic, and as such media itself becomes mediated via software. At the same time, previous media forms are increasingly digitalised and placed in databases, viewed not on the original equipment but accessed through software devices, browsers and apps. As all media becomes algorithmic, it is subject to monitoring and control at a level to which we are not accustomed – e.g. Amazon's mass deletion of Orwell's 1984 from personal Kindles in 2009 (Stone 2009).

The rolling out of the sensor-based world of the internet of things is already underway, with companies such as Broadcom developing Wireless Internet Connectivity for Embedded Devices: "WICED Direct will allow OEMs to develop wearable sensors — pedometers, heart-rate monitors, keycards — and clothing that transmit everyday data to the cloud via a connected smartphone or tablet" (Seppala 2013). Additionally, Apple is developing new technology in this area with its iBeacon software layer, which uses Bluetooth Low Energy (BLE) to create location-aware micro-devices and "can enable a mobile user to navigate and interact with specific regions geofenced by low cost signal emitters that can be placed anywhere, including indoors, and even on moving targets" (Dilger 2013). In fact, the "dual nature of the iBeacons is really interesting as well. We can receive content from the beacons, but we can be them as well" (Kosner 2013). This relies on Bluetooth version 4.0, also called "Bluetooth Smart", which supports devices that can be powered for many months by a small button battery, and in some cases for years. Indeed,

BLE is especially useful in places (like inside a shopping mall) where GPS location data may not be reliably available. The sensitivity is also greater than either GPS or WiFi triangulation. BLE allows for interactions as far away as 160 feet, but doesn’t require surface contact (Kosner 2013).

These new computational sensors enable Local Positioning Systems (LPS) or micro-location, in contrast to the less precise technology of Global Positioning Systems (GPS). These “location based applications can enable personal navigation and the tracking or positioning of assets” to the centimetre, rather than the metre, and hence have great potential as tracking systems inside buildings and facilities (Feldman 2009).
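Such micro-location typically works by estimating distance from received signal strength. A minimal sketch of the underlying arithmetic, assuming the standard log-distance path-loss model (the function name and calibration values here are illustrative, not drawn from any particular SDK):

```python
def estimate_distance(rssi: float, measured_power: float = -59.0, n: float = 2.0) -> float:
    """Estimate distance in metres from a BLE beacon using the
    log-distance path-loss model.

    rssi           -- received signal strength in dBm
    measured_power -- calibrated RSSI at 1 metre (beacon-specific)
    n              -- path-loss exponent (~2 in free space, 2-4 indoors)
    """
    return 10 ** ((measured_power - rssi) / (10 * n))

# At the calibration point the estimate is exactly 1 metre:
print(estimate_distance(-59.0))   # → 1.0
# A 20 dBm weaker signal implies an order of magnitude more distance:
print(estimate_distance(-79.0))   # → 10.0
```

In practice readings are noisy and are smoothed over many samples, and the exponent `n` must be calibrated to each environment, which is why centimetre-level claims depend heavily on the deployment.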

Bring Your Own Device (BYOD)
This shift also includes the move from relatively static desktop computers to mobile computers and tablet-based devices – the consumerisation of technology. Indeed, according to the International Telecommunications Union (ITU 2012: 1), in 2012 there were 6 billion mobile devices (up from 2.7 billion in 2006), with YouTube alone streaming 200 terabytes of video media per day. Indeed, by the end of 2011, 2.3 billion people (i.e. one in three) were using the Internet (ITU 2012: 3).
Users were creating 1.8 zettabytes of data annually by 2011, and this is expected to grow to 7.9 zettabytes by 2015 (Kalakota 2011). To put this in perspective, a zettabyte is equal to 1 billion terabytes – clearly at these scales the storage sizes become increasingly difficult for humans to comprehend. A zettabyte is roughly equal in size to twenty-five billion Blu-ray discs or 250 billion DVDs.
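These conversions are easy to check with a few lines of arithmetic. A sketch assuming decimal (SI) units and nominal capacities of 50 GB for a dual-layer Blu-ray and 4.7 GB for a DVD – on those assumptions the disc counts come out somewhat lower than the round figures quoted above, which presumably use slightly different capacities:

```python
ZB = 10**21   # zettabyte, decimal SI
TB = 10**12   # terabyte
GB = 10**9    # gigabyte

print(ZB // TB)         # 1,000,000,000 -> one billion terabytes per zettabyte
print(ZB / (50 * GB))   # ~2.0e10  -> roughly 20 billion dual-layer Blu-rays
print(ZB / (4.7 * GB))  # ~2.1e11 -> roughly 213 billion DVDs
```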

The acceptance by users and providers of the consumerisation of technology has also opened up the space for the development of "wearables", and these highly intimate devices are under current development, with the most prominent example being Google Glass. Often low-power devices, making use of BLE and iBeacon-type technologies, they augment our existing devices, such as the mobile phone, rather than replacing them outright, and offer new functionalities, such as fitness monitors, notification interfaces, contextual systems and so forth.

The Personal Cloud (PC)
These pressures are creating an explosion in data and a corresponding expansion in various forms of digital media (currently uploaded to corporate clouds). As a counter-move to the existence of massive centralised corporate systems there is a call for Personal Clouds (PCs), a decentralisation of data from the big cloud providers (Facebook, Google, etc.) into smaller personal spaces (see Personal Cloud 2013). Conceptually this is interesting in relation to BYOD.
This of course changes our relationship to knowledge, and the forms of knowledge which we keep and are able to use. Archives are increasingly viewed through the lens of computation, both in terms of cataloging and storage but also in terms of remediation and configuration. Practices around these knowledges are also shifting, and as social media demonstrates, new forms of sharing and interaction are made possible. Personal Cloud also has links to decentralised authentication technologies (e.g. DAuth vs OAuth).
Digital Media, Social Reading, Sprints
It has taken digital a lot longer than many had thought to provide a serious challenge to print, but it seems to me that we are now in a new moment in which digital texts enable screen-reading, if it is not an anachronism to still call it that, as a sustained reading practice. There are lots of experiments in this space, e.g. my notion of the "minigraph" (Berry 2013) or the mini-monograph, technical reports, the "multigraph" (McCormick 2013), pamphlets, and so forth. There are also new means for writing (e.g. Quip) and social reading and collaborative writing (e.g. Book Sprints).
DIY Encryption and Cypherpunks
Together, these technologies create the contours of a new communicational landscape appearing before us, into which computational media mediates use and interaction. Phones become smart phones and media devices that can identify, monitor and control our actions and behaviour through anticipatory computing. Whilst seemingly freeing us, we are also increasingly enclosed within an algorithmic cage that attempts to surround us with contextual advertising and behavioural nudges.
One response could be “Critical Encryption Practices”, the dual moment of a form of computal literacy and understanding of encryption technologies and cryptography combined with critical reflexive approaches. Cypherpunk approaches tend towards an individualistic libertarianism, but there remains a critical reflexive space opened up by their practices. Commentators are often dismissive of encryption as a “mere” technical solution to what is also a political problem of widespread surveillance. 
CV Dazzle Make-up, Adam Harvey
However, critical encryption practices could provide the political, technical and educative moments required for the kinds of media literacies important today – e.g. in civil society.
This includes critical treatment of and reflection on crypto-systems such as cryptocurrencies like Bitcoin, and the kinds of cybernetic imaginaries that often accompany them. Critical encryption practices could also develop signaling systems – e.g. new aesthetic and Adam Harvey’s work. 
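As a toy illustration of the kind of basic computal literacy that critical encryption practices presuppose, consider the one-time pad, in which each byte of a message is XORed with a byte of a random key of the same length. This is a pedagogical sketch, not production cryptography; any practical intervention would rely on audited libraries and protocols:

```python
import os

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    """Encrypt with a one-time pad: a fresh random key as long as the message."""
    key = os.urandom(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XOR is its own inverse, so decryption repeats the same operation."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ciphertext, key = otp_encrypt(b"nothing to hide?")
assert otp_decrypt(ciphertext, key) == b"nothing to hide?"
```

The point for a critical literacy is less the mechanism itself than what it makes visible: that plaintext defaults are a design decision, not a technical necessity.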
Augmediated Reality
The idea of supplementing or augmenting reality is being transformed with the notion of "augmediated" technologies (Mann 2001). These are technologies that offer a radical mediation of everyday life via screenic forms (such as "Glass") to co-construct a computally generated synoptic meta-reality formed of video feeds, augmented technology, and real-time streams and notifications. Intel's work on Perceptual Computing is a useful example of this kind of media form.
The New Aesthetic
These factors raise issues of new aesthetic forms related to the computal. For example, augmediated aesthetics suggests new forms of experience in relation to its aesthetic mediation (Berry et al 2012). The continuing "glitch" digital aesthetic remains interesting in relation to the new aesthetic and aesthetic practice more generally (see Briz 2013). Indeed, the aesthetics of encryption, e.g. "complex monochromatic encryption patterns," the mediation of encryption, etc. offers new ways of thinking about the aesthetic in relation to digital media more generally and the post-digital (see Berry et al 2013).
Bumblehive and Veillance
Within a security setting one of the key aspects is data collection and it comes as no surprise that the US has been at the forefront of rolling out gigantic data archive systems, with the NSA (National Security Agency) building the country’s biggest spy centre at its Utah Data Center (Bamford 2012) – codenamed Bumblehive. This centre has a “capacity that will soon have to be measured in yottabytes, which is 1 trillion terabytes or a quadrillion gigabytes” (Poitras et al 2013). 
This is connected to the notion of the comprehensive collection of data because, "if you're looking for a needle in the haystack, you need a haystack," according to Jeremy Bash, the former CIA chief of staff. The scale of the data collection is staggering: according to Davies (2013), the UK's GCHQ has placed "more than 200 probes on transatlantic cables and is processing 600m 'telephone events' a day as well as up to 39m gigabytes of internet traffic". Veillance – both surveillance and sousveillance – is made easier with mobile devices and cloud computing, and we face rising challenges in responding to these issues.
The Internet vs The Stacks
The internet as we tend to think of it has become increasingly colonised by massive corporate technology stacks. These companies, Google, Apple, Facebook, Amazon, Microsoft, are called collectively “The Stacks” (Sterling, quoted in Emami 2012) – vertically integrated giant social media corporations. As Sterling observes,

[There’s] a new phenomena that I like to call the Stacks [vertically integrated social media]. And we’ve got five of them — Google, Facebook, Amazon, Apple and Microsoft. The future of the stacks is basically to take over the internet and render it irrelevant. They’re not hostile to the internet — they’re just [looking after] their own situation. And they all think they’ll be the one Stack… and render the others irrelevant… They’re annihilating other media… The Lords of the Stacks (Sterling, quoted in Emami 2012).

The Stacks also raise the issue of resistance and what we might call counter-stacks – hacking the stacks – and movements like Indieweb and Personal Cloud computing are interesting responses to them; Sterling optimistically thinks "they'll all be rendered irrelevant. That's the future of the Stacks" (Sterling, quoted in Emami 2012).
The Indieweb
The Indieweb is a kind of DIY response to the Stacks and an attempt to wrest back some control from these corporate giants (Finley 2013). These Indieweb developers offer an interesting perspective on what is at stake in the current digital landscape; somewhat idealistic and technically oriented, they nonetheless offer a site of critique. They are also notable for "building things", often small-scale, micro-format-type things, decentralised and open source/free software in orientation. The indieweb is, then, "an effort to create a web that’s not so dependent on tech giants like Facebook, Twitter, and, yes, Google — a web that belongs not to one individual or one company, but to everyone" (Finley 2013).
Push Notification
This surface, or interactional layer, of the digital is hugely important for providing the foundations through which we interact with digital media (Berry 2011). Under development are new high-speed adaptive algorithmic interfaces (algorithmic GUIs) that can offer contextual information, and even reshape the entire interface itself, through the monitoring of our reactions to computational interfaces and feedback and sensor information from the computational device itself – e.g. Google Now. 
The Notification Layer
One of the key sites for the reconciliation of the complexity of real-time streaming computing is the notification layer, which will increasingly be an application programming interface (API) and function much like a platform. This is very much the battle taking place between the "Stacks", e.g. Google Now, Siri, Facebook Home, Microsoft "tiles", etc. With the political economy of advertising being transformed by the move from web to mobile, notification layers threaten revenue streams.
It is also a battle over subjectivity and the kind of subject constructed in these notification systems.
Real-time Data vs Big Data
We have been hearing a lot about “big data” and related data visualisation, methods, and so forth. Big data (exemplified by the NSA Prism programme) is largely a historical batch computing system. A much more difficult challenge is real-time stream processing, e.g. future NSA programmes called SHELLTRUMPET, MOONLIGHTPATH, SPINNERET and GCHQ Tempora programme. 
That is, monitoring in real-time, and being able to computationally spot patterns, undertake stream processing, etc.
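The contrast can be made concrete in a few lines: a batch system computes over a completed archive, whereas a stream processor must flag patterns as each event arrives, holding only a bounded window in memory. The following is an illustrative sketch only (the threshold rule and the data are invented for the example):

```python
from collections import deque
from statistics import mean

def monitor(stream, window: int = 5, factor: float = 3.0):
    """Yield events that spike above `factor` times the rolling mean of
    the last `window` events -- a toy real-time pattern spotter that never
    holds more than `window` items in memory."""
    recent = deque(maxlen=window)
    for event in stream:
        if len(recent) == recent.maxlen and event > factor * mean(recent):
            yield event
        recent.append(event)

# 'Telephone events' per interval; the burst of 40 is flagged as it arrives.
traffic = [10, 12, 9, 11, 10, 40, 12, 10]
print(list(monitor(traffic)))   # → [40]
```

The same rule run as a batch job would only report the spike after the archive is complete; the generator form reports it the moment the event is processed.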
Contextual Computing
With multiple sensors built into new mobile devices (e.g. camera, microphones, GPS, compass, gyroscopes, radios, etc.) new forms of real-time processing and aggregation become possible. In some senses, then, this algorithmic process is the real-time construction of a person's possible "futures" or their "futurity" – the idea, even, that eventually the curation systems will know "you" better than you know yourself, which is interesting for notions of ethics/ethos. This is the computational real-time imaginary envisaged by corporations, like Google, that want to tell you what you should be doing next…
Anticipatory Computing
Our phones are now smart phones, and as such become media devices that can also be used to identify, monitor and control our actions and behaviour through anticipatory computing. Elements of subjectivity, judgment and cognitive capacities are increasingly delegated to algorithms and prescribed to us through our devices, and there is clearly the danger of a lack of critical reflexivity, or even critical thought, in this new subject. This new paradigm of anticipatory computing stresses the importance of connecting up multiple technologies to enable a new kind of intelligence within these technical devices.
Towards a Critical Response to the Post-Digital
Computation in a post-digital age is fundamentally changing the way in which knowledge is created, used, shared and understood, and in doing so changing the relationship between knowledge and freedom. Indeed, following Foucault (1982) the “task of philosophy as a critical analysis of our world is something which is more and more important. Maybe the most certain of all philosophical problems is the problem of the present time, and of what we are, in this very moment… maybe to refuse what we are” (Dreyfus and Rabinow 1982: 216). 
One way of doing this is to think about Critical Encryption Practices, for example, and the way in which technical decisions (e.g. plaintext defaults on email) are made for us. The critique of knowledge also calls for us to question the coding of instrumentalised reason into the computal. This calls for a critique of computational knowledge and as such a critique of the society producing that knowledge. 
Andreessen, M. (2011) Why Software Is Eating The World, Wall Street Journal, August 20 2011,
Bamford, J. (2012) The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say), Wired, accessed 19/03/2012,
Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave Macmillan.
Berry, D. M. (2013) The Minigraph: The Future of the Monograph?, Stunlaw, accessed 29/08/2013,
Berry, D. M., Dartel, M. v., Dieter, M., Kasprzak, M. Muller, N., O’Reilly, R., and Vicente, J. L (2012) New Aesthetic, New Anxieties, Amsterdam: V2 Press.
Berry, D. M., Dieter, M., Gottlieb, B., and Voropai, L. (2013) Imaginary Museums, Computationality & the New Aesthetic, BWPWAP, Berlin: Transmediale.
Briz, N. (2013) Apple Computers, accessed 29/08/2013,
Davies, N. (2013) MI5 feared GCHQ went ‘too far’ over phone and internet monitoring, The Guardian, accessed 22/06/2013,
Dilger, D.E. (2013) Inside iOS 7: iBeacons enhance apps’ location awareness via Bluetooth LE, AppleInsider, accessed 02/09/2013,

Emami, G (2012) Bruce Sterling At SXSW 2012: The Best Quotes, The Huffington Post, accessed 29/08/2013,
Feldman, S. (2009) Micro-Location Overview: Beyond the Metre…to the Centimetre, Sensors and Systems, accessed 02/09/2013,

Finley, K. (2013) Meet the Hackers Who Want to Jailbreak the Internet, Wired
ITU (2012) Measuring the Information Society, accessed 01/01/2013,
Kalakota, R. (2011) Big Data Infographic and Gartner 2012 Top 10 Strategic Tech Trends, accessed 05/05/2012,

Kosner, A. W. (2013) Why Micro-Location iBeacons May Be Apple’s Biggest New Feature For iOS 7, Forbes, accessed 02/09/2013,

Mann, S. (2001) Digital Destiny and Human Possibility in the Age of the Wearable Computer, London: Random House.

Manovich, L. (2013) Software Takes Command, MIT Press.
McCormick, T. (2013) From Monograph to Multigraph: the Distributed Book, LSE Blog: Impact of Social Sciences, accessed 02/09/2013,

Personal Cloud (2013) Personal Clouds, accessed 29/08/2013,
Poitras, L., Rosenbach, M., Schmid, F., Stark, H. and Stock, J. (2013) How the NSA Targets Germany and Europe, Spiegel, accessed 02/07/2013,
Schlueter Langdon, C. (2003) Does IT Matter? An HBR Debate – Letter from Chris Schlueter Langdon, Harvard Business Review (June): 16, accessed 26/08/2013,
Seppala, T. J. (2013) Broadcom adds WiFi Direct to its embedded device platform, furthers our internet-of-things future, Engadget, accessed 02/09/2013,

Stone, B. (2009) Amazon Erases Orwell Books From Kindle, The New York Times, accessed 29/08/2013,

Phenomenological Approaches to the Computal: Some Reflections on Computation

3D Haptic Technology, University of Hull 2013

Computation is transforming the way in which knowledge is created, used, shared and understood, and in doing so changing the relationship between knowledge and freedom. It encourages us to ask questions about philosophy in a computational age and its relationship to the mode of production that acts as a condition of possibility for it. Today's media are softwarized, which imposes certain logics, structures and hierarchies of knowledge onto the processes of production and consumption. This is also becoming more evident with the advent of digital systems, smart algorithms, and real-time streaming media. We could therefore argue that the long-predicted convergence of communications and computers, originally identified as "compunication" (see Oettinger and Legates 1977; Lawrence 1983), has now fully arrived. Softwarised media thus lead us to consider how mediation is experienced through these algorithmic systems and the challenges this poses for a phenomenology of the computal.

Contribution to Keynote Symposium at Conditions of Mediation ICA Pre-conference on 17th June 2013, Birkbeck, University of London. 

Exhaustive Media

This is the edited text of a talk given by David M. Berry at Transmediale 2013 at the Depletion Design panel. 

Today there is constant talk about the “fact” that we (or our societies and economies) are exhausted, depleted or in a state of decay (see Wiedemann and Zehle 2012).  This notion of decline is a common theme in Western society, for example Spengler’s The Decline of the West, and is articulated variously as a decline in morals, lack of respect for law, failing economic or military firepower, relative education levels, threats from ecological crisis, and so on. Talk of decline can spur societies into paroxysms of panic, self-criticism and calls for urgent political intervention. That is not to say that claims to relative decline are never real as such, indeed relative measures inevitably shift in respect to generation ageing and change, particularly in relation to other nations. However, the specific decline discourse is interesting for what it reveals about the concerns and interests of each population, and a particular generation within it, whether it be concerns with health, wealth, or intellectual ability, and so forth.

Karl Mannheim

The issue of a tendency inherent in a temporal social location, that is, the certain definite modes of behaviour, feeling and thought arising from constantly repeated experience in a common location in the historical dimension of the social process, is what Karl Mannheim called the generation entelechy (Mannheim 1952). This is the process of living in a changed world: a re-evaluation of the cultural inventory, a forgetting of that which is no longer useful, and a coveting of that which is not yet won. In other words, it is the particular stratification of experience in relation to the historical context of a specific generation – in both, what we might call, the inner and the outer dimensions of experience. This social process also naturally causes friction between different generation entelechies, such as that between an older and a younger generation; there may also be moments of conflict within a generation entelechy, between what Mannheim called generation units, although there is not space here to develop this question in relation to the digital.

The relative conditions of possibility, particularly in relation to what we might call the technical milieu of a generation entelechy, contribute towards slower or faster cultural, social, and economic change. The quicker the pace of social, technical, economic and cultural change, the greater the likelihood that a particular generation location group will react to the changed situation by producing its own entelechy. Thus, individual and group experiences act as crystallising agents in this process, which plays out in notions of “being young”, “freshness”, “cool”, or being “with it” in some sense, and which acts to position generation entelechies in relation to each other both historically and culturally.

Mannheim identifies the crucial features of a generation entelechy as: (1) new participants in the cultural process are continually emerging, whilst (2) former participants in that process are continually disappearing; (3) members of any one generation can participate only in a temporally limited section of the historical process; (4) it is therefore necessary continually to transmit the accumulated cultural heritage; and (5) the transition from generation to generation is a continuous process (Mannheim 1952).

In relation to this general talk of depletion in Europe and the US, one of the recent decline-issues, particularly in the US and UK context, has been the worry about the declining computational ability of the young generations – more specifically, their lack of digital literacy (or what I call elsewhere iteracy). In this specific discourse, the worry is articulated that a new generation is emerging that is not adequately prepared for what appears to be a deeply computational economic and cultural environment. This is usually, although not always, linked to a literal exhaustion of the new generation, the implication being a generation that is unprepared, apathetic, illiterate and/or disconnected. Often these claims are located within what Mannheim calls the “Intelligentsia”: “in every society there are social groups whose special task it is to provide an interpretation of the world for that society. We call these the ‘Intelligentsia’” (Mannheim 1967: 9). It is no surprise, then, that in the instance of digital literacy we see the same strata across society commenting on and debating the relative merits of computational competences, abilities and literacies at a number of different levels, but particularly in relation to the education of new generations through discussions of school, college and university digital literacies.

Some of these claims are necessarily the result of a form of generational transference of the older generation’s own worries concerning its inadequacies, in this case usually either (1) an inability to use the correct type of computational devices/systems; (2) a concern that the young are not using computers in the manner that they themselves were taught, for example using a physical keyboard and mouse; or (3) a dismissal of the new forms of digitality that are seen as trivial, wasteful of time, and hence socially or economically unproductive (a classic example being social media). A number of themes and levels of analysis are brought out in these discussions: often, but not only, the question of the moral failings of the new generation, but also technical abilities, economic possibilities such as vocationalism, and the ways of thinking appropriate to a perceived new environment or economic and technical ecology. This is similar to Foucault’s question of a generational ethos, as it were, and whether it might be helpful if we,

envisage modernity rather as an attitude than as a period of history. And by ‘attitude,’ I mean a mode of relating to contemporary reality; a voluntary choice made by certain people; in the end, a way of thinking and feeling; a way, too, of acting and behaving that at one and the same time marks a relation of belonging and presents itself as a task. A bit, no doubt, like what the Greeks called an ethos (Foucault 1984: 39). 

Immanuel Kant

Here, though, I want to take the problem of the exhaustion of the new generations as a focus – the “exhausted” literally, as in the Latin exhaustus, “drained out”. In other words, to ask why, how, by whom and for whom something is “drained out”, and where to, in our highly computational cultures. That is, to turn the question around and identify the exhaustion of the new generations as indeed an important site of concern, but to argue that the exhaustion the new generations are experiencing is not an apathy or lack of energy, but rather a product of the political economy: an ethos that results from being subject to the digital draining of data, information and energy into technical systems through specific drainage points, operating through and on computational devices, and particularly intimate technologies like mobile phones, tablets and laptops. This is to focus on the extent to which digital media are increasingly becoming exhaustive media, and to critically interrogate their function, form and content.

To put it another way, what would be the “enlightenment” in relation to the new exhaustive media and the software ecologies of trackers, web-bugs, beacons, apps, clouds and streams? If we are to agree with Kant that the enlightenment is the universal, free, public uses of reason (Kant 1991), how do we assure freedom of public reason in the digital age? As Foucault described, for Kant,

when one is reasoning only in order to use one’s reason, when one is reasoning as a reasonable being (and not as a cog in a machine), when one is reasoning as a member of reasonable humanity, then the use of reason must be free and public. Enlightenment is thus not merely the process by which individuals would see their own personal freedom of thought guaranteed. There is Enlightenment when the universal, the free, and the public uses of reason are superimposed on one another (Foucault 1984: 36-37).

Thus for Kant, to reach our political maturity as human beings we should “dare to know”, or sapere aude, that is, “have courage to use your own reasoning” (Kant 1991: 54). The challenge, then, is for us to rise to Foucault’s injunction to think in terms of the ‘historical ontology of ourselves’, which enables us to further test contemporary reality to find “change points”, and to ask what the implications might be for an investigation of the events by which we constitute ourselves as subjects. Indeed, Foucault further argues,

Michel Foucault

I do not know whether we will ever reach mature adulthood. Many things in our experience convince us that the historical event of the Enlightenment did not make us mature adults, and we have not reached that stage yet. However, it seems to me that a meaning can be attributed to that critical interrogation on the present and on ourselves which Kant formulated by reflecting on the Enlightenment. It seems to me that Kant’s reflection is even a way of philosophizing that has not been without its importance or effectiveness during the last two centuries. The critical ontology of ourselves has to be considered not, certainly, as a theory, a doctrine, nor even as a permanent body of knowledge that is accumulating; it has to be conceived as an attitude, an ethos, a philosophical life in which the critique of what we are is at one and the same time the historical analysis of the limits that are imposed on us and an experiment with the possibility of going beyond them (Foucault 1984: 49).

One way forward might be to begin to map the exhaustion of a new generation entelechy in terms of the new political economy that is emerging around the ability to exhaust us of our thoughts, movements, health, life, practices, and so on. This is usefully captured by the term of art in technical circles for the trace that all users of computational systems create: the “data exhaust”. We might therefore think in terms of the computational imaginaries that are crystallised within particular generation entelechies – and how we might gain a critical purchase on them. In other words, the generation entelechy is connected to a particular computational Weltanschauung, or worldview – what I call computationality elsewhere (Berry 2011).

This is to move away from a concern with the mere competences of a new generation entelechy and to widen the focus to the critical and reflexive abilities of a new generation and how they might be actuated. That is, rather than teach computer programming as a skill for a new economy, we should instead explore the historical, philosophical, theoretical and critical context for the various forms of digital making. One way of doing this might be to look at concrete case studies of actual programming sites and projects, in order to understand why and how these forms of activity are related, the context in which they have developed, and their trajectories – a research project that has recently come to be closely associated with critical strands in software studies (Berry 2011).

This is a critical means of contributing to the important project of making the invisibility of much of our digital infrastructure visible and available to critique. Of course, understanding digital technology is a “hard” problem for the humanities, liberal arts and social sciences due to its extremely complex forms, which contain agentic functions and normative (but often hidden) values. Indeed, we might contemplate the curious problem that as the digital increasingly structures the contemporary world, it also withdraws, becomes backgrounded (Berry 2011). This enables us to explore how knowledge is transformed when mediated through code and software, and to apply critical approaches to big data, visualisation, digital methods, digital humanities, and so forth. But crucially, we must also see this in relation to the crystallisation of new entelechies around digital technologies.

Thinking about knowledge in this way enables us to explore generational epistemological changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks, what Mitcham calls a ‘new ecology of artifice’ (Mitcham 1998: 43). Indeed, the proliferation of contrivances that are computationally based is truly breathtaking, and each year we are given statistics that demonstrate how profound the new computational world is. For example, in 2012, 427 million Europeans (or 65 percent) use the internet and more than 90% of European internet users read news online (Wauters 2012). These computational devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects, and usage have to be subject to the kind of critical reasoning that both Kant and Foucault called for.

This is nonetheless made much more difficult both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful, and less power-hungry, and by the increase in complexity, power, range, and intelligence of the software that powers them. Of course, we should also be attentive to the over-sharing, or excessive and often invisible collection, of data within these device ecologies, which lies outside the control of users to ‘redact themselves’, as represented by the recent revelation that the “Path” and “Hipster” apps were automatically harvesting user address book data on mobile phones (BBC 2012).

We might consider these transformations in light of what Eric Schmidt, ex-CEO of Google, called “augmented humanity”. He described this as a number of movements within the capabilities of contemporary computational systems, such that at Google, “we know roughly who you are, roughly what you care about, roughly who your friends are…Google also knows, to within a foot, roughly where you are… I actually think most people don’t want Google to answer their questions… They want Google to tell them what they should be doing next” (Eaton 2011). Translated, this means that Google believes that it knows better than users what it is that they should be doing, and in many cases even thinking. Thus, the computational device the user holds contains the means to source the expertise to prescribe action in particular contexts, what we might call “context consumerism”. That is, the user is able to purchase their cognitive/memory/expertise capabilities as required, on-the-fly. Humanity thus becomes what we might call, following the naming conventions of programming languages such as C++, an augmented or extended humanity++. Indeed, there are now a number of examples of these developments, for instance Google Glass, contextual UX, locative technologies, etc.

Bernard Stiegler
We might consider the entire technical and media industries in light of what Stiegler (2010) has called the “programming industries”, which are involved in creating institutionalised “context”. This is data collected from the tacit knowledge of users and their “data exhaust” and delegated to computer code/software. These algorithms then create “applied knowledge” and are capable of making “judgments” in specific use cases. Indeed, today people rarely use raw data – they consume it in processed form, using computers to aggregate or simplify the results. This means that increasingly the “interface” to computation is “visualised” through computational/information aesthetics techniques and visualisation, a software veil that hides the “making” of the digital computations involved. Today we see this increasingly being combined with real-time contextual sensors, history and so forth into “cards” and other push-notification systems that create forms of just-in-time memory/cognitive processes.
These are new forms of invisible interface, ubiquitous computing and enchanted objects which use context to present the user with predictive media and information in real time. The aim, we might say, is to replace forethought by reconfiguring or replacing human “secondary memory” and thinking with computation. That is, the crucial half-second of pre-conscious decision-forming processes whereby we literally “make up our own minds” is today subject to the unregulated and aggressive targeting of the programming industry. This temporally located area of the processes of mind we might call the “enlightenment moment”, as it is the fraction of a second that creates the condition of possibility for independent thought and reflexivity itself. Far from being science fiction, this is now the site of new technologies in the process of being constructed, current examples including Google Now, Apple Siri, MindMeld and Tempo – not to mention the aggressive targeting of this area by advertising companies, and, more worryingly, of new generation entelechies who are still developing their critical or reflexive skills, such as children and young people. This, of course, raises important questions about whether these targeted computational systems and contextual processes should be regulated in law in relation to the young. These are not new issues in the regulation of the minds of children, but the aggressiveness of computational devices and the targeting of this forethought by the programming industries raises the stakes further; indeed, as Stiegler quotes,

after decades of struggle in civil society, governments have been forced to regulate air pollution, food and water,… few governments have shown themselves capable of regulating marketing practices targetting children. This situation has left industry free to decide what children watch on television, what products they are offered in order to distract them, what strategies can be used to manipulate their wishes, desires, and values (Brodeur, quoted in Stiegler 2010: 88)

For example, in the UK, with the turn to a competitive model of higher education, each university literally begins to compete for an “audience” of students to take its courses, for which the students now pay a considerable sum of money to be both educated and entertained. We could say that the universities become, in effect, another branch of the culture industry. This represents a dangerous moment for the creation of critical attention, the possibility of reflexivity and enlightenment, inasmuch as increasingly students receive from the lecturer but do not need to participate; they await their educational portions, which are easy to swallow, and happily regurgitate them in their assessments. The students are taught that they are the consumers of a product (rather than themselves the product of education, as reflexive citizens in majority), and that this service industry, the university, is there to please them, to satisfy their requirements. How could it be otherwise when they are expected to fill in survey after survey of market research questions to determine how “satisfied”, how “happy” and how “content” they are with their consumption – which remains, finally, the delivery of the best possible product: the first-class degree, the A marks, the final certificate covered in gilt which will deliver them the best-paying job possible. The university itself becomes a device, an interface between consumer and producer, and it too becomes highly technologised as it seeks to present a surface commodity layer to its consuming students. It is in this context that MOOCs (Massive Open Online Courses) should be understood and critiqued, as they represent only the public face of changes taking place on the inside of universities at all levels.

The new imaginaries of highly invasive cognitive technologies are already being conceptualised as the “Age of Context” within the programming industries. Under this notion all material objects are carriers of signal to be collected and collated into computational systems; even the discarded, the trash, contains RFID chips that can provide data for contextual systems. But more importantly, the phones we carry, the mobile computers and the tablets, now built with a number of computational sensors, such as GPS, compasses, gyroscopes, microphones, cameras, wifi and radio transmitters, enable a form of contextual awareness to be technically generated through massive real-time flows of data. For example, in the US Presidential election on 6/11/2012, Twitter recorded 31 million election-related Tweets from users of its streaming service – peaking at 327,452 Tweets per minute (TPM) (Twitter 2012) – all of which can be fed back to the user. In a real-time stream ecology, such as Twitter, the notion of the human is already contested and challenged by a form of “hyper attention”, in contrast to the ‘deep attention’ of previous ages. Indeed, the user is constantly bombarded with data. This is increasingly understood as a lack within human capabilities which needs to be remedied using yet more technology: real-time streams need visualisation, cognitive assistants, push notifications, dashboard interfaces, and so forth.

Google Now and the Notification “Cards”

This much heralded “Age of Context” is being built upon the conditions of possibility made feasible by distributed computing, cloud services, smart devices, sensors, and new programming practices around mobile technologies. This new paradigm of anticipatory computing stresses the importance of connecting up multiple technologies that provide data from real-time streams and APIs (Application Programming Interfaces) to enable a new kind of intelligence within these technical devices. A good example of this is given by Google’s new “Google Now” product, which attempts to think “ahead” of the user by providing algorithmic prediction based on past user behavior, preferences, Google search result history, smart device sensors, geolocation, and so forth. As Google explains,

Google Now gets you just the right information at just the right time. It tells you today’s weather before you start your day, how much traffic to expect before you leave for work, when the next train will arrive as you’re standing on the platform, or your favorite team’s score while they’re playing. And the best part? All of this happens automatically. Cards appear throughout the day at the moment you need them (Google 2012).
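The anticipatory logic Google describes can be caricatured as a rule engine running over contextual signals. The sketch below is entirely hypothetical – the field names, rules and "cards" are invented for illustration and are not drawn from any actual Google Now implementation, where such rules would be inferred from behavioural history rather than hand-written:

```python
from dataclasses import dataclass

@dataclass
class Context:
    hour: int           # hour of the day, 0-23
    location: str       # a coarse place label
    calendar_next: str  # next appointment, or "" if none

def cards_for(ctx):
    """Return the 'cards' whose (illustrative, hand-written) trigger
    matches the current context."""
    cards = []
    if ctx.hour < 9 and ctx.location == "home":
        cards.append("weather: rain expected, take an umbrella")
    if ctx.location == "station":
        cards.append("next train arrives in 8 minutes")
    if ctx.calendar_next:
        cards.append("time to leave for: " + ctx.calendar_next)
    return cards

# Cards "appear" simply because the context changed,
# not because the user asked for anything.
print(cards_for(Context(hour=8, location="home", calendar_next="seminar")))
```

The point of the caricature is that the user issues no query: the system decides, from context alone, what the user "needs" to see and when.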

These new technologies form a constellation that creates new products and services, new tastes and desires, and the ability to make an intervention into forethought – to produce the imaginary that Google names “Augmented Humanity” (Eaton 2011). In some senses this follows from the idea that after ‘human consciousness has been put under the microscope, [it has been] exposed mercilessly for the poor thing it is: a transitory and fleeting phenomenon’ (Donald, quoted in Thrift 2006: 284). The idea of augmented humanity and contextual computing are intended to remedy this ‘problem’ in human cognitive ability. Here the technologists are aware that they need to tread carefully as Eric Schmidt, Google’s ex-CEO, revealed “Google policy is to get right up to the creepy line and not cross it” (Schmidt, quoted in Richmond 2010). The “creepy line” is the point at which the public and politicians think a line has been crossed into surveillance, control, and manipulation, by capitalist corporations – of course, internally Google’s experimentation with these technologies is potentially much more radical and invasive. These new technologies need not be as dangerous as they might seem at first glance, and there is no doubt that the contextual computing paradigm can be extremely useful for users in their busy lives – acting more like a personal assistant than a secret policeman. Indeed, Shel Israel argues that this new ‘Age of Context’ is an exciting new augmented world made possible by the confluence of a number of competing technologies. He writes that contextual computing is built on,

[1] social media, [2] really smart mobile devices, [3] sensors, [4] Big Data and [5] mapping. [Such that] the confluence of these five forces creates a perfect storm whose sum is far greater than any one of the parts (Israel 2012).

These technologies are built on complex, intertwined webs of software that tie together these new meta-systems, which abstract from (are built upon):

  • the social layer, such as Twitter and Facebook,
  • the ambient data collection layer, using the sensors in mobile devices,
  • the web layer, the existing and future web content and technologies,
  • the notification layer, enabling reconciliation and unification of real-time streams,
  • the app layer, which is predominantly made up of single-function apps.

These various layers are then loosely coupled so as to interoperate in unexpected but “delightful” ways, as experienced with conversational interfaces such as Apple Siri, which have both an element of “understanding” and contextual information about their environment. Critically engaging with this age of context is difficult due to the distributed software, material objects, “enchanted” objects, black-boxed systems and computational “things” that make it up. The threads that hold these systems together are not well understood as a totality, nor are their new calculative dashboards (e.g. notification interfaces). Indeed, we can already discern new forms of power that are tentatively visible in this new context layer, enabling new political-economic actors and a new form of intensive exploitation, such as that demonstrated by the intensification of the pre-cognitive moment discussed earlier.
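The loose coupling of these layers can be sketched as independent event sources feeding a single notification stream. This is a toy model under invented names – it describes no actual platform, only the architectural idea that each layer emits events without knowing about the others, and a unifying stream reconciles them:

```python
# Toy model of loosely coupled "layers" feeding a notification stream.
# Each layer is just a generator function that emits events; none knows
# about the others (all names and events are illustrative).

def social_layer():
    yield {"layer": "social", "event": "mention from @friend"}

def sensor_layer():
    yield {"layer": "sensor", "event": "location changed: campus"}

def notification_stream(*layers):
    """Unify events from independent layers into one ordered stream."""
    for layer in layers:
        for event in layer():
            yield event

for e in notification_stream(social_layer, sensor_layer):
    print("[{}] {}".format(e["layer"], e["event"]))
```

Because the coupling happens only at the stream, layers can be added or removed without the others changing – which is also why the resulting whole is so hard to grasp, or to critique, as a totality.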

Iconic New Aesthetic Image from Google Earth

I have argued previously that moments like the “new aesthetic” and glitches (Berry 2011, 2012a, 2012b, 2012c), and, with others, that exceptions and contextual failures, are useful for beginning to map these new systems (Berry et al 2012; Berry et al 2013). The black box of these exhaustive systems is spun around us in wireless radio networks and RFID webs – perhaps doubly invisible. We need to critique moments in exhaustive media that are connected to particular forms of what we might call “exhaustive” governmentality – self-monitoring and life-hacking practices, whether aesthetic, political, social or economic – but also the way in which they shape the generational entelechies. For example, this could be through the creation of an affective relation with real-time streaming ecologies and a messianic mode of media. Indeed, we might say that anticipatory computing creates a new anticipatory subject, which elsewhere I have called a riparian citizen (Berry 2011: 144).

Indeed, it seems to me that mapping how computation contributes to new generational entelechies, and functions to limit their ability to critically reflect on their own historical dimension of the social process, is a crucial problem – for example, where hegemonic rhetorics of the digital (“new aesthetic”, “pixels”, “sound waves” and so forth) are widely used to convince and seldom challenged. This contributes to a wider discussion of how medial changes create epistemic changes. For me, this project remains linked to a critical ontology of ourselves as ethos, a critical philosophical life and the historical analysis of imposed limits, in order to reach towards experiments with going beyond current conditions and limits (Foucault 1984). Indeed, the possibility of a “digital” enlightenment ethos needs to be translated into a coherent “labor of diverse inquiries”, one of which is the urgent focus on the challenge to thinking represented by the intensification of the programming industries’ work on the “enlightenment moment” of our prethought. This requires methodological approaches, which could certainly draw on the archeological and genealogical analysis of practices suggested by Foucault (1984), but also on the technological and strategic practices associated with shaping both policies and concrete technologies themselves – perhaps, if not necessarily “Evil Media” (Fuller and Goffey 2012), then certainly critical software and political praxis. Last, and not least, is the theoretical moment required in developing the conceptual and critical means of defining unique forms of relations to things, others and ourselves (Foucault 1984) that are not limited by the frame of computationality.


BBC (2012) iPhone Apps Path and Hipster Offer Address-book Apology, BBC, 9 February 2012,

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave.

Berry, D. M. (2012a) Abduction Aesthetic: Computationality and the New Aesthetic, Stunlaw, accessed 18/04/2012,

Berry, D. M. (2012b) Computationality and the New Aesthetic, Imperica, accessed 18/04/2012.
Berry, D. M. (2012c) Understanding Digital Humanities, London: Palgrave.

Berry, D. M., Dartel, M. v., Dieter, M., Kasprzak, M. Muller, N., O’Reilly, R., and Vicente, J. L (2012) New Aesthetic, New Anxieties, Amsterdam: V2 Press

Berry, D. M., Dieter, M., Gottlieb, B., and Voropai, L. (2013) Imaginary Museums, Computationality & the New Aesthetic, BWPWAP, Berlin: Transmediale.

Eaton, K. (2011) The Future According to Schmidt: “Augmented Humanity,” Integrated into Google, Fast Company, 25 January 2011, augmented-humanity-integrated-google

Foucault, M. (1984) What is Enlightenment?, in Rabinow P. (ed.) The Foucault Reader, New York, Pantheon Books, pp. 32-50.

Fuller, M. and Goffey, A. (2012) Evil Media, MIT Press.

Google (2012) Google Now, Google, 2012,

Israel, S. (2012) Age of Context: Really Smart Mobile Devices, Forbes, 5 September 2012,

Kant, I (1991) An Answer to the Question: What is Enlightenment?, in Kant: Political Writings, Cambridge: Cambridge University Press.

Mannheim, K. (1952) The Problem of Generations,  in Kecskemeti, P. (ed.) Karl Mannheim: Essays, London: Routledge, pp 276-322, accessed 15/02/2013,

Mannheim, K. (1967) Ideology and Utopia, London: Harvest.

Mitcham, C. (1998) The Importance of Philosophy to Engineering, Teorema, Vol. XVII/3 (Autumn, 1998).

Richmond, S. (2010) Eric Schmidt: Google Gets Close to ‘the Creepy Line’, The Telegraph, 5 October 2010, close-to-the-creepy-line/

Stiegler, B. (2010) For a New Critique of Political Economy, Cambridge: Polity Press.

Stiegler, B. (2010) Taking Care of Youth and the Generations, Cambridge: Polity Press.

Thrift, N. (2006) Re-inventing Invention: New Tendencies in Capitalist Commodification, Economy and Society, 35.2 (May, 2006): 284.

Wauters, R. (2012) 427 Million Europeans are Now Online, 37% Uses More than One Device: IAB, The Next Web, 31 May 2012, online-37-uses-more-than-one-device-iab/

Wiedemann, C. and Zehle, S. (2012) Depletion Design: A Glossary of Network Ecologies, Amsterdam: Institute for Network Cultures

The Author Signal: Nietzsche’s Typewriter and Medium Theory

Malling-Hansen Writing Ball

One of the more poignant moments in Nietzsche’s long and tormented career was when the catalogue of his many ailments, both mental and physical, started to include encroaching blindness. To remedy this he turned, in 1882, to experimentation with the (very primitive) typewriters of the time – a Malling-Hansen Writing Ball. This was a major crisis in his writing, as he had to accustom himself to what must have seemed almost an entirely new medium, and it led him to confess that “our writing tools are also working on our thoughts” (quoted in Kittler 1999). Nietzsche, who had dreamed of a machine that would transcribe his thoughts, chose the machine whose “rounded keyboard could be used exclusively through the sense of touch because on the surface of the sphere each spot is designated with complete certainty by its spatial position” (Kittler 1992: 193). Indeed, as Carr (2008) argues, “once he had mastered touch-typing [with the new typewriter], he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page.” The condition of possibility created by a particular medium forms an important part of the theoretical foundations of medium theory, which questions the way in which medial changes lead to epistemic changes. This has become an important area of inquiry in relation to the differences introduced by computation and digital media more generally (see Berry 2011). Indeed, in Nietzsche’s case,

One of Nietzsche’s friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. “Perhaps you will through this instrument even take to a new idiom,” the friend wrote in a letter, noting that, in his own work, his “‘thoughts’ in music and language often depend on the quality of pen and paper.”… “You are right,” Nietzsche replied, “our writing equipment takes part in the forming of our thoughts” (Carr 2008).

Stylistics and, perhaps above all, its younger and computerized daughter, stylometry, have already attempted to find stylistic or stylometric traces (the “author signal”) of similar changes in authors’ writing practices – with little positive result. The case of Henry James’s move from handwriting (typewriting) to dictation in the middle of What Maisie Knew has been studied by Hoover (2009). Yet, according to the NYU professor, the author of The Ambassadors took this sudden change in his stride and, despite the fact that we know exactly where the switch occurred, stylometry has been helpless in this case; or, rather, it can show no sudden shift in James’s stylistic evolution, which continues throughout his career (Hoover 2009). In a way, a similar problem was addressed by Le, Lancashire, Hirst and Jokel (2011) in their study of possible symptoms of Alzheimer’s disease in Agatha Christie’s word usage, an approach also used to confirm the same diagnosis in Iris Murdoch. From another perspective, many studies exist on various authors’ switch from handwriting or typing to word processing (see also Lev Manovich’s [2008] work on cultural analytics).

Letter from Friedrich Nietzsche to Heinrich Köselitz, Geneva, Feb 17, 1882. Earliest typewriter-written text by Nietzsche still in existence.

Nietzsche’s case seemed somewhat more promising, as his attempts at typewriting were not only commented on by him but also made at a very early stage of mechanical text production – and at the overlap between discourse networks (Kittler 1992: 193). Nietzsche is thought to have used the typewriter only for a short period during 1882, an experiment claimed to have lasted either weeks (Kittler 1992) or up to a couple of months (Kittler 1999: 206) – although Günzel and Schmidt-Grépály (2002) more concretely state that he typed between February and March 1882, when Nietzsche was also finishing The Gay Science. In fact, Nietzsche produced a collection of typed works he titled 500 Aufschriften auf Tisch und Wand: Für Narrn von Narrenhand. Nietzsche himself commented, “after a week [of typewriting practice,] the eyes no longer have to do their work” (Kittler 1999: 202).[1] Indeed, the technological shock may have been much stronger here than in the case of James or of authors who, some twenty years ago, enthusiastically exchanged white correction fluid for the word-processor delete button and cut-and-paste. Using the typewriter, Nietzsche’s prose “changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style” (Kittler 1999: 203). Indeed, Kittler argues that,

Nietzsche’s reasons for purchasing a typewriter were very different from those of his colleagues who wrote for entertainment purposes, such as Twain, Lindau, Amytor, Hart, Nansen, and so on. They all counted on increased speed and textual mass production; the half-blind, by contrast, turned from philosophy to literature, from rereading to a pure, blind, and intransitive act of writing (Kittler 1999: 206).

In other words, the inscription technologies of Nietzsche’s time contributed to his thinking. Nevertheless, for Nietzsche the typewriter was “more difficult than the piano, and long sentences were not much of an option” (Emden 2005: 29). Even after his failed experimentation with the typewriter, he remained enthralled by its possibilities – “the assumed immediacy of the written word… seemingly connected in a direct way to the thoughts and ideas of the author through the physical movement of the hand… was displaced by the flow of disconnected letters on a page, one as standardized as another” (Emden 2005: 29).

The turning point for Kittler (1999) is represented by The Genealogy of Morals which was written in 1887 – by now Nietzsche was forced by continued poor vision to use secretaries to record his words. Here, it is argued that Nietzsche elevated the typewriter itself to the “status of a philosophy,” suggesting that “humanity had shifted away from its inborn faculties (such as knowledge, speech, and virtuous action) in favor of a memory machine. [When] crouched over his mechanically defective writing ball, the physiologically defective philosopher [had] realize[d] that ‘writing . . . is no longer a natural extension of humans who bring forth their voice, soul, individuality through their handwriting. On the contrary, . . . humans change their position – they turn from the agency of writing to become an inscription surface'” (Winthrop-Young and Wutz 1999: xxix).

In the very tentative analysis presented here (and which must be redone with a greater collection of Nietzsche’s works), the standard stylometric procedure of comparing normalized word frequencies of the most frequent words in the corpus was applied by means of the “stylo” (ver. 0-4-7) script for the R statistical programming environment (Eder and Rybicki 2011).

The script converts the electronic texts to produce complete most-frequent-word (MFW) frequency lists, calculates their z-scores in each text according to the Delta procedure (Burrows 2002); uses the top frequency lists for analysis; performs additional procedures for better accuracy (including Hoover’s culling, the removal of all words that do not appear in all the texts for better independence of content); compares the results for individual texts; produces Cluster Analysis tree diagrams that show the distances between the texts; and, finally, combines the tree diagrams made for various parameters (number of words used in each individual analysis) in a bootstrap consensus tree (Dunn et al. 2005, quoted in Baayen 2008: 143-147). The script, in its ever-evolving versions, is available online (Eder, Rybicki and Kestemont 2012). The consensus tree approach, based as it is on numerous iterations of attribution tests at varying parameters, has already shown itself as a viable alternative to single-iteration analyses (Rybicki 2012, Eder and Rybicki 2012).
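Although the “stylo” script itself is written for R, the core of the procedure can be sketched in a few lines. The following is a minimal illustration of my own (not the stylo implementation, and omitting culling, cluster analysis and the consensus tree): relative frequencies of the corpus’s most frequent words are computed for each text, z-scored across the corpus, and compared via Burrows’s Delta, the mean absolute difference of z-scores.

```python
from collections import Counter
import statistics

def mfw_frequencies(texts, n_mfw=100):
    """Relative frequencies, per text, of the corpus's n most frequent words."""
    tokens = {name: t.lower().split() for name, t in texts.items()}
    corpus = Counter(w for toks in tokens.values() for w in toks)
    mfw = [w for w, _ in corpus.most_common(n_mfw)]
    freqs = {}
    for name, toks in tokens.items():
        counts = Counter(toks)
        freqs[name] = {w: counts[w] / len(toks) for w in mfw}
    return mfw, freqs

def burrows_delta(mfw, freqs, a, b):
    """Mean absolute difference of z-scored MFW frequencies (Burrows 2002)."""
    total = 0.0
    for w in mfw:
        vals = [f[w] for f in freqs.values()]
        mu, sigma = statistics.mean(vals), statistics.pstdev(vals)
        if sigma == 0:  # word equally frequent in every text: carries no signal
            continue
        total += abs((freqs[a][w] - mu) / sigma - (freqs[b][w] - mu) / sigma)
    return total / len(mfw)
```

In the analysis proper, the resulting pairwise distances would feed the Cluster Analysis tree diagrams, and the trees produced at varying MFW settings would then be combined into the bootstrap consensus tree.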

The first analysis was performed for complete texts of six works by Nietzsche: Die Geburt der Tragödie (1872) and Menschliches, Allzumenschliches (1878), both written before 1879, his “year of blindness,” and before his typewriter experiments of 1882; and Also sprach Zarathustra (1883-5), Jenseits von Gut und Böse (1886), Ecce homo and Götzen-Dämmerung (1888). The resulting graph suggests a chronological evolution of Nietzschean style, as the early works cluster to the right, and the later ones to the left, of Figure 1.

Figure 1, chronological evolution of Nietzschean style

Yet the pattern above shares the usual problem of multivariate graphs for just a few texts: a possibility of randomness in the order of clusters. This is why it makes sense to perform another analysis, this time on the above texts divided into equal-sized segments (10,000 words is usually safe). Figure 2 confirms the chronological evolution pattern as the segments of each individual book are correctly clustered together. What is more, the previous result is corroborated by a very similar pattern in terms of creation date.

Figure 2, chronological evolution pattern as segments of each individual book are clustered together

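The division into equal-sized segments is itself simple; a minimal sketch of my own (not the stylo implementation, which offers several sampling options) might drop the incomplete remainder so that every segment enters the analysis at the same length:

```python
def segment(text, size=10000):
    """Split a text into consecutive word segments of equal length.

    The trailing remainder that does not fill a complete segment is
    dropped, so that all segments are directly comparable.
    """
    words = text.split()
    return [" ".join(words[i:i + size])
            for i in range(0, len(words) - size + 1, size)]
```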
As has been said above, a greater number of texts is needed to confirm these initial findings. There is indeed a clear division of Nietzschean style into early and late(r). Whether this is a repetition of a phenomenon observed in many other writers (Henry James, for one), or a direct impact of technological change and therefore a confirmation of the claims of medium theory, remains to be investigated. Nonetheless, this approach offers an additional method to explore how medial change can be mapped in relation to changes in knowledge. It also offers a potential means for exploring the way in which contemporary debates over the introduction of computational and digital means of creating, storing and distributing knowledge affect the way in which authorship itself is undertaken.

This doesn’t have to be strictly between mediums; there is also potential for exploring intra-medial change and the way in which writing has been influenced by the long dark ages of Microsoft Word as the hegemonic form of digital writing (1983-2012), which gradually appears to be coming to an end in the age of locative media, apps, and real-time streams. Indeed, with exploratory digital literature forms, represented in ebooks, computational document format (CDF) and apps, such as Tapestry, which allow the creation of “tap essays” (Gannes 2012), new ways of authoring and presenting knowledge are suggested. Only a short perusal of Apple iBooks Author, for example, shows the way in which the paper forms underlying the digital writings of the 20th century are giving way to new ways of writing and structuring text within the framework of a truly digital medium, made possible through tablet computers, smart phones and the emerging “tabs, pads and boards” three-screen world.

With digital forms, new ways of presenting and storing knowledge are also constructed – not just the relational database, but also object-oriented, graph and other forms, with which people are increasingly familiar as modes of practice in relation to manipulating knowledge. How this will change the writing of future literature remains to be seen, but Kittler clearly foresaw an important turn in the way in which we should research and understand these processes, writing,

To put it plainly: in contrast to certain colleagues in media studies, who first wrote about French novels before discovering French cinema and thus only see the task before them today as publishing one book after another about the theory and practice of literary adaptations… In contrast to such cheap modernizations of the philological craft, it is important to understand which historical forms of literature created the conditions that enabled their adaptation in the first place. Without such a concept, it remains inexplicable why certain novels by Alexandre Dumas, like The Three Musketeers, have been adapted for film hundreds of times, while old European literature, from Ovid’s Metamorphoses to weighty baroque tomes, were simple non-starters for film… It is possible… to conclude from the visually hallucinatory ability that literature acquired around 1800 that a historically changed mode of perception had entered everyday life. As we know, after a preliminary shock Europeans and North Americans learned very quickly and easily how to decode film sequences. They realized that film edits did not represent breaks in the narrative and that close-ups did not represent heads severed from bodies. (Kittler 2009: 108)

Equally, today in a world filled with everyday computational media, Europeans and North Americans are learning very quickly to adapt to the real-time streaming media of the 21st Century. We are no longer surprised when live television is paused to make a drink, or our mobile phone tells us that we are running late for a meeting and offers us a quicker route to get to the location. Nor are we perplexed by multiple screens, screens within screens, transmedia storytelling, social media, or even contextual navigation and adaptive user interfaces. Thus new social epistemologies are emerging in relation to computational media, that is, “the conditions under which groups of agents (from generations to societies) acquire, distribute, maintain and update (claims to) belief and knowledge [has changed] through the active mediation of code/software” (Berry 2012: 380). Again, a historically changed mode of perception has entered everyday life, and which we can explore through its traces in cultural artefacts, such as literature, film, television, software and so forth. 
With the suggestive analysis offered in this short article, we hope to have demonstrated how computational approaches can create research questions in relation to medium theory – questions which, although not necessarily offering conclusive results, nonetheless press us to explore further the links between medial and epistemic change.

David M. Berry and Jan Rybicki


[1] According to Günzel and Schmidt-Grépály (2002), Nietzsche typed 15 letters, 1 postcard and 34 bulk sheets (including some poems and verdicts) with his ‘Schreibkugel‘ from Malling-Hansen in 1882.

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave.

Berry, D. M. (2012) The Social Epistemologies of Software, Social Epistemology: A Journal of Knowledge, Culture and Policy, 26:3-4, 379-398

Burrows, J.F. (2002) “Delta: A Measure of Stylistic Difference and a Guide to Likely Authorship,” Literary and Linguistic Computing 17: 267-287.
Carr, N. (2008) Is Google Making Us Stupid?, The Atlantic, accessed 19/12/2012.
Dunn, M., Terrill, A., Reesink, G., Foley, R.A. and Levinson, S.C. (2005) “Structural Phylogenetics and the Reconstruction of Ancient Language History,” Science 309: 2072-2075. Quoted in Baayen, R.H. (2008) Analyzing Linguistic Data. A Practical Introduction to Statistics using R, Cambridge: Cambridge University Press.
Eder, M. and Rybicki, J. (2011). Stylometry with R. Stanford: Digital Humanities 2011.
Eder, M., Rybicki, J., and Kestemont, M. (2012). Computational Stylistics, accessed 19/12/2012.
Eder, M. and Rybicki, J. (2012). “Do Birds of a Feather Really Flock Together, or How to Choose Test Samples for Authorship Attribution,” Literary and Linguistic Computing, First published online August 11, 2012: 10.1093/llc/fqs036.
Emden, C. (2005) Nietzsche On Language, Consciousness, And The Body, University of Illinois Press.

Gannes, L. (2012) When an App Is an Essay Is an App: Tapestry by Betaworks, Wall Street Journal.
Günzel, S. and Schmidt-Grépály, R. (2002) (eds.) Friedrich Nietzsche. Schreibmaschinentexte, 2nd edition, Weimar: Verlag der Bauhaus Universität, accessed 19/12/2012.

Kittler, F. A. (1992) Discourse Networks, 1800/1900, Stanford University Press.
Kittler, F. A. (1999) Gramophone, Film, Typewriter, translated by Geoffrey Winthrop-Young and Michael Wutz, Stanford: Stanford University Press, 200-208, quoted in Patricia Falguières, “A Failed Love Affair with the Typewriter”, rosa b, accessed 19/12/2012.
Kittler, F. A. (2009) Optical Media, London: Polity Press. 
Hoover, D. L. (2009) “Modes of Composition in Henry James: Dictation, Style, and What Maisie Knew,” Digital Humanities 2009, University of Maryland, June 22-25.
Le, X., Lancashire, I., Hirst, G., and Jokel, R. (2011) “Longitudinal detection of dementia through lexical and syntactic changes in writing: a case study of three British novelists,” Literary and Linguistic Computing, 26(4): 435-461
Manovich, L. (2008) Cultural Analytics, accessed 19/12/2012,
Rybicki, J. (2012) “The Great Mystery of the (Almost) Invisible Translator: Stylometry in Translation.” In Oakes, M., Ji, M. (eds). Quantitative Methods in Corpus-Based Translation Studies, Amsterdam: John Benjamins.
Winthrop-Young, G. and Wutz, M. (1999) Translators’ Introduction, in Kittler, F. A., Gramophone, Film, Typewriter, Stanford: Stanford University Press.

Against Remediation

A new aesthetic through Google Maps

In contemporary life, the social is a site for a particular form of technological focus and intensification. Traditional social experience has, of course, taken part in various forms of technical mediation and formatting, and been subject to control technologies. Think, for example, of the way in which the telephone structured the conversation, diminishing the value of proximity whilst simultaneously intensifying certain kinds of bodily response and language use. It is important, then, to trace media genealogies carefully and to be aware of the previous ways in which the technological and social have met – and this includes the missteps, mistakes, dead-ends, and dead media. This understanding of media, however, has increasingly been framed in terms of the notion of remediation, which has been thought to contribute helpfully to our thinking about media change whilst sustaining a notion of medium specificity. Bolter and Grusin (2000), who coined its contemporary usage, state,

[W]e call the representation of one medium in another remediation, and we will argue that remediation is a defining characteristic of the new digital media. What might seem at first to be an esoteric practice is so widespread that we can identify a spectrum of different ways in which digital media remediate their predecessors, a spectrum depending on the degree of perceived competition or rivalry between the new media and the old (Bolter and Grusin 2000: 45).

However, it seems to me that we now need to move beyond talk of the remediation of previous modes of technological experience and media, particularly when we attempt to understand computational media. I think that this is important for a number of reasons, both theoretical and empirical. Firstly, in a theoretical vein, the concept of remediation has become a hegemonic concept and as such has lost its theoretical force and value. Remediation traces its intuition from McLuhan’s notion that the content of a new media is an old media – McLuhan actually thought of “retrieval” as a “law” of media. But it seems to me that beyond a fairly banal point, this move has the effect of both desensitising us to the specificity and materiality of a “new” media, and more problematically, resurrecting a form of media hauntology, in as much as the old media concepts “possess” the new media form. Whilst it might have held some truth for the old “new” media, although even here I am somewhat sceptical, within the context of digital, and more particularly computational media, I think the notion is increasingly unhelpful. Secondly, remediation gestures toward a depth model of media forms, within which it encourages a kind of originary media, origo, to be postulated, or even to remain latent as an a priori. This enables a form of reading of the computational which justifies a disavowal of the digital, through a double movement of simultaneously exclaiming the newness of computational media, whilst hypostatizing a previous media form “within” the computational.

Thirdly, I do not believe that it accurately describes the empirical situation of computational media; in fact it obfuscates the specificity of the computational in relation to its structure and form. This has a secondary effect in as much as analysis of computational media is viewed through a lens, or method, that is legitimated through this prior claim to remediation. Fourthly, I think remediation draws its force through a reliance on an ocularity, that is, remediation is implicitly visual in its conceptualisation of media forms, and the way in which one medium contains another relies on a deeply visual metaphor. This is significant in relation to the hegemony of the visual form of media in the twentieth century. Lastly, and for this reason, I think it is time for us to historicize the concept of remediation. Indeed, remediation seems to me to be a concept appropriate to the media technologies of the twentieth century, shaped by the historical context of thinking about media in relation to the materialities of those prior media forms and the constellation of concepts which appeared appropriate to them. We need to think computational media in terms which de-emphasize, or certainly reduce, the background assumptions of remediation as something akin to a looking glass, and think in terms of a medium as an agency or means of doing something – this means thinking beyond the screenic.

So in this paper, rather than talk about “remediation”, and in the context of computational media, I want to think about de-mediation, that is, when a media form is no longer dominant, becoming marginal, and later absorbed/reconstructed in a new medium which en-mediates it. By en-mediate I want to draw attention to the securing of the boundaries related to a format, that is, a representation, or mimesis, of a previous media – but it is not the “same”, nor is it “contained” in the new media. This distinction is important as, at the moment of enmediation, computational categories and techniques transform the newly enmediated form – I am thinking here, for example, of the examples given by the new aesthetic and related computational aesthetics. By enmediate I also want to draw links with Heidegger’s notion of enframing (Gestell) and the structuring provided by a condition of possibility, that is, a historical constellation of concepts. I also want to highlight the processual computational nature of en-mediation; in other words, enmediation requires constant work to stabilize the enmediated media. In this sense, computational media is deeply related to enmediation as a total process of mediation through digital technologies. One way of thinking about enmediation is to understand it as gesturing towards a paradigmatic shift in the way in which “to mediate” should be understood, one which does not relate to “passing through” or “informational transfer” as such; rather, enmediate, in this discussion, aims to enumerate and uncover the specificity of computational mediation as mechanic processing.

I therefore want to move quickly to thinking about what it means to enmediate the social. By the term “social” I am particularly thinking of the mediational foundations for sociality that were made available in twentieth-century media, and which, when enmediated, become something new. So sociality is not remediated, it is enmediated – that is, the computational mediation of society is not the same as the mediation processes of broadcast media; rather it has a specificity that is occluded if we rely on the concept of remediation to understand it. Thus, it is not an originary form of sociality that is somehow encoded within media (or even constructed/co-constructed), and which is re-presented in the multiple remediations that have occurred historically. Rather it is the enmediation of specific forms of sociality, which in the process of enmediation are themselves transformed, constructed and made possible in a number of different and historically specific modes of existence.

Bolter, J. D. and Grusin, R. (2000) Remediation: Understanding New Media, MIT Press.

Coping Tests as a Method for Software Studies

In this post I want to begin to outline a method for software reading that in some senses can form the basis of a method in software studies more generally. The idea is to use the pragmata of code, combined with its implicit temporality and goal-orientedness, to develop an idea of what I call coping tests. This notion draws on the idea developed by Heidegger of “coping” as a specific means of experiencing that takes account of the at-handiness (Zuhandenheit) of equipment (that is, entities/things/objects which are being used in action) – in other words, coping tests help us to observe the breakdowns of coded objects. This is useful because it helps us to think about the way software/code is in some senses a project that is not just static text on a screen, but a temporal structure that has a past, a processing present, and a futural orientation to the completion (or not) of a computational task. I want to develop this in contrast to attempts by others to focus on the code through either a heavily textual approach (and critical code studies tends in this direction), or else a purely functionality-driven approach (which can have idealist implications in some forms, whereby a heavily mathematised approach tends towards a platonic notion of form).

In my previous book, The Philosophy of Software (Berry 2011), I use obfuscated code as a helpful example – not as a case of unreadable reading, or even for the spectacular; rather, I use it as a stepping-off point to talk about the materiality of code through the notion of software testing. Obfuscated code is code deliberately written to be unreadable to humans but perfectly readable to machines. This can take a number of different forms, from simply mangling the text (from a human point of view), to using distraction techniques, such as confusing or deliberately mislabeling variables, functions, calls, etc. It can even take the form of aesthetic effects, like drawing obvious patterns, streams, and lines in the code, or forming images through the arrangement of the text.
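A trivial illustration of my own (not an example from the book): the two functions below are behaviourally identical, but the second has been mangled from the human reader’s point of view – misleading, confusable names and gratuitous indirection – while remaining perfectly readable to the machine.

```python
def average(numbers):
    """Plainly written: the intent is legible at a glance."""
    return sum(numbers) / len(numbers)

def O0O(I):
    """The same computation, obfuscated for human (not machine) readers."""
    l, O = (lambda x: (sum(x), len(x)))(I)  # confusable names: l, O, I, 0
    return l / O

# Running both is the "test" that shows the mangled text is legitimate code.
assert average([2, 4, 6]) == O0O([2, 4, 6])
```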

Testing is a hugely important part of the software lifecycle: it links the textual source code to the mechanic software and creates the feedback cycle between the two. This I linked to Callon and Latour’s (via Boltanski and Thevenot) use of the notion of ‘tests’ (or trials of strength) – implying that it is crucially the running of these obfuscated code programs that shows that they are legitimate code (they call these legitimate tests), rather than nonsense. The fact that they are unreadable by humans and yet testable is very interesting, more so as they become aesthetic objects in themselves as programmers start to create ASCII art, both as a way of making the (unreadable) code readable again as an image, and as a way of adding another semiotic layer to the meaning of the code’s function.

The nature of coping that these tests imply (as trials of strength), combined with the mutability of code, is then constrained through limits placed in terms of the testing and the structure of the project-orientation. This is also how restrictions are delegated into the code – restrictions which can then be retested through what Boltanski and Thevenot call ‘trials of strength’. The borders of the code are also enforced through tests of strength which define the code qua code, in other words as the required/tested coded object. It is important to note that these can also be reflexively “played with” in terms of clever programming that works at the borderline of acceptability for programming practices (hacking is an obvious example of this).

In other words, testing as coping tests can be understood in two different modes: (i) ontic coping tests, which legitimate and approve the functionality and content of the code – in other words, that the code is doing what it should, instrumentally, ethically, etc. Here we need to work and think at a number of different levels, of course, from unit testing, application testing, user interface testing, and system testing more generally, in addition to taking account of the context and materialities that serve as conditions of possibility for testing (which could take the form of a number of approaches, including ethnographies, discursive approaches, etc.); and (ii) ontological coping tests, which legitimate the code qua code – that it is code at all – for example, authenticating that the code is the code we think it is. We can think of code signing as an example of this, although it has a deeper significance as the quiddity of code. This then takes a more philosophical approach towards how we can understand, recognise or agree on the status of code as code and identify underlying ontological structural features, etc.
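A minimal sketch of the distinction in code (the function, the sentence tested and the digest check are all mine, purely for exposition): the first assertion is an ontic coping test of what the code does; the second, modelled loosely on code signing, is an ontological check that the artefact being run is the artefact that was “signed”.

```python
import hashlib

# The coded object is kept as text so that it can be both executed and "signed".
code_text = "def word_count(text):\n    return len(text.split())\n"
namespace = {}
exec(code_text, namespace)
word_count = namespace["word_count"]

# (i) Ontic coping test: legitimates the functionality and content of the code.
assert word_count("our writing tools are also working on our thoughts") == 9

# (ii) Ontological coping test: legitimates the code qua code. The digest
# recorded at "signing" time is compared against the artefact actually run,
# as a code-signing certificate would be checked before execution.
signed_digest = hashlib.sha256(code_text.encode()).hexdigest()
running_digest = hashlib.sha256(code_text.encode()).hexdigest()
assert running_digest == signed_digest
```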

For critical theory, I think tests are a useful abstraction as an alternative (or in addition) to the close reading of source code. This can be useful from a humanities perspective for teaching some notions of ‘code’ through the idea of ‘iteracy’ for reading code, and will be discussed throughout my new book, Critical Theory and the Digital, in relation to critical readings of software/code opened up through the categories given by critical theory. But this is also extremely important for contemporary critical researchers and students, who require a much firmer grasp of computational principles in order to understand the economy, culture and society which have become softwarized, but also more generally for the humanities today, where some knowledge of computation is becoming required to undertake research.

One of the most interesting aspects of this approach, I think, is that it helps sidestep the problems associated with literally reading source code, and the problematic of computational thinking in situ as a programming practice. Coping tests can be developed within a framework of “depth”, in as much as different kinds of tests can be performed by different research communities; in some senses this is analogous to a test suite in programming. For example, one might have UI/UX coping tests, functionality coping tests, API tests, forensic tests (linking to Matthew Kirschenbaum’s notion of forensic media), and even archaeological coping tests (drawing from media archaeology, and particularly theorists such as Jussi Parikka) – and here I am thinking both in terms of coping tests written in the present to “test” the “past”, as it were, and of the interesting history of software testing itself, which could be reconceptualised through this notion of coping tests, both as test scripts (discursive) and in terms of software programming practice more generally, social ontologies of testing, testing machines, and so forth.[1] We might also think about the possibilities for thinking in terms of social epistemologies of software (drawing on Steve Fuller’s work, for example).

As culture and society are increasingly softwarized, it seems to me that it is very important that critical theory is able to develop concepts in relation to software and code, as the digital. In a later post I hope to lay out a framework for studying software/code through coping tests and a framework/method with case studies (which I am developing with Anders Fagerjord, from IMK, Oslo University).


[1] Perhaps this is the beginning of a method for what we might call software archaeology.