Category Archives: code

Signal Lab

As part of the Sussex Humanities Lab, at the University of Sussex, we are developing a research group clustered around information-theoretic themes of signal/noise, signal transmission, sound theorisation, musicisation, simulation/emulation, materiality, theoretical work in game studies, behavioural ideologies and interface criticism. The cluster is grouped under the label Signal Lab and we aim to explore the specific manifestations of the mode of existence of technical objects. This is explicitly a critical and political economic confrontation with computation and computational rationalities.

Signal Lab will focus on techno-epistemological questions around the assembly and re-assembly of past media objects, postdigital media and computational sites. This involves both attending to the impressions of the physical hardware (as a form of techne) and the logical and mathematical intelligence resulting from software (as a form of logos). Hence we aim to undertake an exploration of the technological conditions of the sayable and thinkable in culture, and of how the inversion of reason as rationality calls for the excavation of how techniques, technologies and computational media direct human and non-human utterances without reducing techniques to mere apparatuses.

This involves the tracing of the contingent emergence of ideas and knowledge in systems in space and time, to understand distinctions between noise and speech, signal and absence, message and meaning. This includes an examination of the use of technical media to create the exclusion of noise as both a technical and political function and the relative importance of chaos and irregularity within the mathematization of chaos itself. It is also a questioning of the removal of the central position of human subjectivity and the development of a new machine-subject in information and data rich societies of control and their attendant political economies.

Within the context of information theoretic questions, we revisit the old chaos, and the return of the fear of, if not aesthetic captivation toward, a purported contemporary gaping meaninglessness, often associated with a style of nihilism, a lived cynicism and a jaded glamour of emptiness or misanthropy. This is particularly so in relation to a political aesthetic that desires the liquidation of the subject, which in the terms of our theoretic approach creates not only a regression of consciousness but also a regression to real barbarism. That is, data, signal, mathematical noise, information and computationalism conjure the return of fate and the complicity of myth with nature, a concomitant total immaturity of society, and a return to a society in which self-reflection can no longer open its eyes, and in which the subject not only does not exist but instead becomes understood as a cloud of data points, a dividual and an undifferentiated data stream.

Signal Lab will therefore pay attention both to the synchronic and diachronic dimensions of computational totality, taking the concrete meaningful whole and essential elements of computational life and culture. This involves the explanation of the emergence of the present given social forces in terms of some past structures and general tendencies of social change. That is, that within a given totality, there is a process of growing conflict among opposite tendencies and forces which constitutes the internal dynamism of a given system and can partly be examined at the level of behaviour and partly at the level of subjective motivation. This is to examine the critical potentiality of signal in relation to the possibility of social forces and their practices and articulations within a given situation and how they can play their part in contemporary history. This potentially opens the door to new social imaginaries and political possibility for emancipatory politics in a digital age.


On Capture

In thinking about the conditions of possibility that make possible the mediated landscape of the post-digital (Berry 2014) it is useful to explore concepts around capture and captivation, particularly as articulated by Rey Chow (2012). Chow argues that being “captivated” is

the sense of being lured and held by an unusual person, event, or spectacle. To be captivated is to be captured by means other than the purely physical, with an effect that is, nonetheless, lived and felt as embodied captivity. The French word captation, referring to a process of deception and inveiglement [or persuade (someone) to do something by means of deception or flattery] by artful means, is suggestive insofar as it pinpoints the elusive yet vital connection between art and the state of being captivated. But the English word “captivation” seems more felicitous, not least because it is semantically suspended between an aggressive move and an affective state, and carries within it the force of the trap in both active and reactive senses, without their being organised necessarily in a hierarchical fashion and collapsed into a single discursive plane (Chow 2012: 48). 

To think about capture, then, is to think about the mediatized image in relation to reflexivity. For Chow, Walter Benjamin inaugurated a major change in the conventional logic of capture: from a notion of reality being caught or contained in the copy-image, as in a repository, the copy-image becomes mobile, and this mobility adds to its versatility. The copy-image then supersedes or replaces the original as the main focus, and as such this logic of the mechanical reproduction of images undermines hierarchy and introduces a notion of the image as infinitely replicable and extendable. Thus the “machinic act or event of capture” creates the possibility for further dividing and partitioning, that is, for the generation of copies and images, and sets in motion the conditions of possibility of a reality that is structured around the copy.

Chow contrasts capture to the modern notion of “visibility” such that, as Foucault argues, “full lighting and the eyes of a supervisor capture better than darkness, which ultimately protected. Visibility is a trap” (Foucault 1991: 200). Thus in what might be thought of as the post-digital – a term that Chow doesn’t use but which I think is helpful in thinking about this contrast – what is at stake is no longer this link between visibility and surveillance, nor indeed the link between becoming-mobile and the technology of images, but rather the collapse of the “time lag” between the world and its capture.

This is when time loses its potential to “become fugitive” or “fossilised” and hence to be anachronistic. The key point is that the very possibility of memory is disrupted when images become instantaneous and therefore synonymous with an actual happening. Thus we find ourselves in a condition of the post-digital, whereby digital technologies make possible not only the instant capture and replication of an event, but also the very definition of the experience through its mediation, both at the moment of capture – such as with the waving smartphones at a music concert or event – and in the subsequent recollection and reflection on that experience.

Thus the moment of capture or “arrest” is an event of enclosure, locating and making possible the sharing and distribution of a moment through infinite reproduction and dissemination. So capture represents a techno-social moment but is also discursive in that it is a type of discourse that is derived from the imposition of power on bodies and the attachment of bodies to power. This Chow calls a heteronomy or heteropoiesis, as in a system or artefact designed by humans, with some purpose, but not able to self-reproduce but which is yet able to exert agency in the form of prescription often back onto its designers. Essentially producing an externality in relation to the application of certain “laws” or regulations.

Nonetheless, capture and captivation also constitute a critical response through the possibility of a disconnecting logic and the dynamics of mimesis. This possibility, reflected through the notion of entanglements, refers to the “derangements in the organisation of knowledge caused by unprecedented adjacency and comparability or parity”. This is, of course, definitional in relation to the notion of computation, which itself works through a logic of formatting, configuration, structuring and the application of computational ontologies (Berry 2011, 2014).

Here capture offers the possibility of a form of practice in relation to alienation by making the inquirer adopt a position of criticism, the art of making strange. Chow here is making links to Brecht and Shklovsky, and in particular their respective predilection for estrangement in artistic practice, such as in Brecht’s notion of Verfremdung, and thus to show how things work, whilst they are being shown (Chow 2012: 26-28). In this moment of alienation the possibility is thus raised of things being otherwise. This is the art of making strange as a means to disrupt everyday conventionalism and refresh the perception of the world – art as device. The connections between techniques of capture and critical practice as advocated by Chow, and reading or writing the digital, are suggestive in relation to computation more generally, not only in artistic practice but also in terms of critical theory. Indeed, capture could be a useful hinge around which to subject the softwarization practices, infrastructures and experiences of computation to critical thought, both in terms of their technical and social operations and in terms of the extent to which they generate a coercive imperative for humans to live and stay alive under the conditions of a biocomputational regime.


Berry, D. M. (2011) The Philosophy of Software, London: Palgrave.

Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury.

Chow, R. (2012) Entanglements, or Transmedial Thinking about Capture, Durham: Duke University Press.

Foucault, M. (1991) Discipline and Punish, London: Penguin Social Sciences.

Questions from a Worker Who Codes

In relation to the post-digital, it is interesting to ask to what extent the computational is both the horizon of, and the gatekeeper to, culture today (Berry 2014a). If code operates as the totalising mediator of culture, if not the condition for such culture, then access to both culture and code becomes a social, political and aesthetic question. This is partially bound up with questions of literacy and the scope of such knowledges, usually framed within the context of computational competence in a particular programming language. This question returns again and again in relation to the perceived educative level of a population required to partake of the commonality shared within a newly post-digital culture – should one code? In other words, to what extent must a citizen be able to read and interact with the inscriptions that are common to a society? Indeed, in the register of art, for example, Brecht considered the question itself to be superfluous, inasmuch as providing an opportunity of access, and therefore praxis, opens the possibility of such experiences and understanding. He writes,

one need not be afraid to produce daring, unusual things for the proletariat so long as they deal with its real situation. There will always be people of culture, connoisseurs of art, who will interject: “Ordinary people do not understand that.” But the people will push these persons impatiently aside and come to a direct understanding with artists (Brecht 2007: 84).

In relation to the practices of code itself, code is, of course, not a panacea for all the ills of society. It is, however, a competence that increasingly marks itself out as a practice which creates opportunities to interact with and guide one’s life, in being able to operate, and define how, the computational functions in relation to individuation processes (see Stiegler 2013, Cowen 2013, Economist 2014). Not only that: as the epistemic function of code grows in relation to the transformation of previous media forms into a digital substrate, and the associated softwarization of the process, culture is itself transformed, and the possibilities for using and accessing that culture change too. Indeed, Bifo argues that without such competences, “the word is drawn into this process of automation, so we find it frozen and abstract in the disempathetic life of a society that has become incapable of solidarity and autonomy” (Berardi 2012: 17). For Berardi, cognitive labour would then have become disempowered and subjected to what he calls “precarization” (Berardi 2012: 141). In response he calls for an “insurrection”, inasmuch as “events” can generate the “activation of solidarity, complicity, and independent collaboration between cognitarians”, that is, “between programmers, hardware technicians, journalists, and artists who all take part in an informational process” (Berardi 2012: 142-3).

The aim of this literacy, if we can call it that, in relation to the computational, and which is similar to what I have called iteracy elsewhere (Berry 2014b), is also connected to notions of reflexivity, critique, and emancipation in relation to the mechanisation of not only labour, but also culture and intellectual activities more generally. Understanding the machine, as it were, creates the opportunity to change it, and to give citizens the capacity to imagine that things might be other than they are.

This is important to avoid a situation whereby the proletarianisation of labour is followed by the capacity of machines to proletarianise intellectual thought itself. That is, that machines define the boundaries of how, as a human being, one must conduct oneself, as revealed by a worker at a factory in France in the 1960s who commented that “to eat, in principle, one must be hungry. However, when we eat, it’s not because we’re hungry, it’s because the electronic brain thought that we should eat because of a gap in production” (Stark 2012: 125). Delegation into the machine of the processes of material and intellectual production abstracts the world into a symbolic representation within the processes of machine code. It is a language of disconnection, a language that disables the worker, but simultaneously disables the programmer, or cognitive worker, who no longer sees another human being, but rather an abstract harmony of interacting objects within a computational space – that is, through the application of compute (Berry 2014c). This is, of course, a moment of reification, and as such code and software act as an ideological screen for the activities of capitalism, and the harsh realities of neoliberal restructuring and efficiencies – the endless work[1] made possible by such softwarization. Indeed, under capital,

time sheds its qualitative, variable, flowing nature; it freezes into an exactly delimited, quantifiable continuum filled with quantifiable ‘things’ (the reified, mechanically objectified ‘performance’ of the worker, wholly separated from his total human personality): in short, it becomes space. In this environment where time is transformed into abstract, exactly measurable, physical space, an environment at once the cause and effect of the scientifically and mechanically fragmented and specialised production of the object of labour, the subjects of labour must likewise be rationally fragmented. On the one hand, the objectification of their labour-power into something opposed to their total personality (a process already accomplished with the sale of that labour-power as a commodity) is now made into the permanent ineluctable reality of their daily life. Here, too, the personality can do no more than look on helplessly while its own existence is reduced to an isolated particle and fed into an alien system. On the other hand, the mechanical disintegration of the process of production into its components also destroys those bonds that had bound individuals to a community in the days when production was still ‘organic’. In this respect, too, mechanisation makes of them isolated abstract atoms whose work no longer brings them together directly and organically; it becomes mediated to an increasing extent exclusively by the abstract laws of the mechanism which imprisons them (Lukács 1971: 90).

But of course here, it is not seconds and minutes measured by “the pendulum of the clock [that] has become as accurate a measure of the relative activity of two workers as it is of the speed of two locomotives”, but rather the microsecond and millisecond time of code, combined with new forms of sensors and distributed computational devices that measure time. Indeed, “time is everything, [humans are] nothing; they are at the most the incarnation of time. Quality no longer matters. Quantity alone decides everything: hour for hour, day for day” (Marx 1976: 125). For it is in the spaces of such quantification that lies the obfuscation of the realities of production, but also of the possibility of changing production to a more democratic and humane system that makes, as Stiegler claims, “a life worth living” (Stiegler 2009).[2]


[1] It is interesting to think about the computational imaginary in relation to the notion of “work” that this entails or is coded/delegated into the machine algorithms of our post-digital age. Campagna (2013) has an interesting formulation of this in what Newman (2013) has called “nothing less than a new updated Ego and Its Own for our contemporary neoliberal age” (Newman 2013: 93). Indeed, Campagna writes, “westerners had to find a way of adapting this mystical exercise to the structures of contemporary capitalism. What would a mantra look like, in the heart of a global metropolis of the 21st Century? What other act might be able to host its obsessive spirit, whilst functioning like a round, magic shield, covering the frightened believers from their fear of freedom? There was only one possible, almost perfect candidate. The activity of repetition par excellence: Work. The endless chain of gestures and movements that had built the pyramids and dug the mass graves of the past. The seal of a new alliance with all that is divine, which would be able to bind once again the whole of humanity to a new and eternal submission. The act of submission to submission itself. Work. The new, true faith of the future” (Campagna 2013: 10). Here, though, I argue that it is not immaterial apparitions and spectres which are haunting humanity and which the Egoist can break free from, but the digital materiality of computers’ abstractions, formed of algorithms and code, which are a condition of possibility for individuation and subjectivity itself within cognitive capitalism.
[2] As Stark writes, “for a worker to claim the right to create—to theoretically “unalienated” labor—was a gesture as threatening to the factory bosses as it was to the official organs of the left, with their vision of the worker acceding to a state of being-in-oneself through work. Regarding this form of sociological indeterminacy, Rancière argues that “perhaps the truly dangerous classes are . . . the migrants who move at the border between classes, individuals and groups who develop capabilities within themselves which are useless for the improvement of their material lives and which in fact are liable to make them despise material concerns.” Further, for Rancière, “Working-class emancipation was not the affirmation of values specific to the world of labor. It was a rupture in the order of things that founded these ‘values,’ a rupture in the traditional division [partage] assigning the privilege of thought to some and the tasks of production to others.” Binetruy affirms this rupture, recalling that while initially wary of “these Parisians who came stuffed with film and cameras,” he quickly realized that “they did not come to teach us any lessons, but rather to transmit technical training that would liberate our spirits through our eyes. Once you have put your eyes behind a camera, you are no longer the same man, your perspective has changed”” (Stark 2012: 150).


Berardi, F. (2012) The Uprising: On Poetry and Finance, London: Semiotext(e).

Berry, D. M. (2014a) The Post-Digital, Stunlaw, accessed 14/1/2014,

Berry, D. M. (2014b) Critical Theory and the Digital, New York: Bloomsbury.

Berry, D. M. (2014c) On Compute, Stunlaw, accessed 14/1/2014,

Brecht, B. (2007) Popularity and Realism, in Aesthetics and Politics, London: Verso Press.

Campagna, F. (2013) The Last Night: Anti-Work, Atheism, Adventure, London: Zero Books.

Cowen, T. (2013) Average Is Over: Powering America Beyond the Age of the Great Stagnation, London: Dutton Books.

Economist (2014) Coming to an office near you, The Economist, accessed 16/01/2014,

Lukács, G. (1971) History and Class Consciousness: Studies in Marxist Dialectics, MIT Press.

Marx, K. (1976) The Poverty of Philosophy, in Karl Marx and Frederick Engels, Collected Works, Volume 6, 1845–1848, London: Lawrence & Wishart.

Newman, S. (2013) Afterword, In Campagna, F. (2013) The Last Night: Anti-Work, Atheism, Adventure, London: Zero Books, pp. 92-5.

Stark, T. (2012) “Cinema in the Hands of the People”: Chris Marker, the Medvedkin Group, and the Potential of Militant Film, OCTOBER, 139, Winter 2012, pp. 117–150.

Stiegler, B. (2009) What Makes Life Worth Living: On Pharmacology, Cambridge: Polity Press

On Compute

Today, the condition of possibility for the milieu of contemporary life is compute. That is, compute as the abstract unit of computation, both as dunamis (potentiality) and energeia (actuality) – that is, as the condition of possibility for the question of the in-itself and the for-itself. Compute as a concept exists in two senses: as the potential contained in a computational system, or infrastructure, and as the actuation of that potential in actual work. Whilst always already a theoretical limit, compute is also the material that may be brought to bear on a particular computational problem – and many problems are now indeed computational problems. The theoretical question posed by compute is thus directly relevant to the study of software, algorithms and code, and therefore to the contemporary condition in computal society, because it represents the moment of potential in the transformation of inert materials into working systems. It is literally the computational unit of “energy” that is supplied to power the algorithms of the world’s systems. Compute, then, is a notion of abstract computation, but it is also the condition of possibility for, and the potential actuation of, that reserve power of computation in a particular task. Compute becomes a key noetic means of thinking through the distribution of computation in the technological imaginary of computal society.

In a highly distributed computational environment, such as we live in today, compute is itself distributed around society, carried in pockets, accessible through networks and wireless connections, and pooled in huge computational clouds. Compute is then not only abstract but lived and enacted in everyday life; it is part of the texture of life, not just as a layer upon life but as a structural possibility for, and mediation of, such living. But crucially, compute is also an invisible factor in society, partially due to the obfuscation of the technical conditions of the production of compute, but also due to the necessity for an interface, a surface, with which to interact with compute. Compute as a milieu is thus never seen as such, even as it surrounds us and constantly interacts with and frames our experiences. Indeed, Stiegler (2009) writes that,

Studying the senses, Aristotle underlines in effect that one does not see that, in the case of touching, it is the body that forms the milieu, whereas, for example, in the case of sight, the milieu is what he calls the diaphane. And he specifies that this milieu, because it is that which is most close, is that which is structurally forgotten, just as water is for a fish. The milieu is forgotten, because it effaces itself before that to which is gives place. There is always already a milieu, but this fact escapes us in the same way that “aquatic animals,” as Aristotle says, “do not notice that one wet body touches another wet body” (423ab): water is what the fish always sees; it is what it never sees. Or, as Plato too says in the Timaeus, if the world was made of gold, gold would be the sole being that would never be seen – it would not be a being, but the inapparent being of that being, appearing only in the occurrence of being, by default (Stiegler 2009: 13-14)

In this sense, compute is the structural condition of possibility that makes the milieu possible by giving it place, in as much as it creates those frameworks within which technicity takes place. The question of compute, both as a theoretical concept and as a technical definition, is crucial for thinking through the challenge of computation more broadly. But, in a rapidly moving world of growing computational power, comparative analysis of computational change is difficult without a metric by which to compare different moments historically. This is made much harder by the reality that compute is not simply the speed and bandwidth of a processor as such, but includes a number of other related technical considerations such as the speed of the underlying motherboard, RAM, graphics processor(s), storage system and so forth.

Compute then is a relative concept and needs to be thought about in relation to previous iterations, and this is where benchmarking has become an important part of the assessment of compute – for example SPECint, a benchmark specification for a processor’s integer processing power maintained by the Standard Performance Evaluation Corporation (SPEC 2014). Another, GeekBench (2013), scores compute against a baseline score of 2500, which is the score of an Intel Core i5-2520M @ 2.50 GHz. In contrast, SYSmark 2007, another benchmark, attempts to bring “real world” applications into the processing measurement by including a number of ideal systems that run canned processing tasks (SYSmark 2007). As can be seen, comparing compute involves a spectrum of benchmarks that test a variety of working definitions of forms of processing capacity. It is also unsurprising that, as a result, many manufacturers create custom modes within their hardware to “game” these benchmarks, unfortunately obfuscating these definitions and comparators. For example,

Samsung created a white list for Exynos 5-based Galaxy S4 phones which allow some of the most popular benchmarking apps to shift into a high-performance mode not available to most applications. These apps run the GPU at 532MHz, while other apps cannot exceed 480MHz. This cheat was confirmed by AnandTech, who is the most respected name in both PC and mobile benchmarking. Samsung claims “the maximum GPU frequency is lowered to 480MHz for certain gaming apps that may cause an overload, when they are used for a prolonged period of time in full-screen mode,” but it doesn’t make sense that S Browser, Gallery, Camera and the Video Player apps can all run with the GPU wide open, but that all games are forced to run at a much lower speed (Schwartz 2013).
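The kind of baseline comparison that benchmarks such as GeekBench perform can be sketched very simply: a raw score is expressed as a multiple of a declared reference machine. The machine names and scores below are hypothetical illustrations, not published results.

```python
# Illustrative sketch of benchmark normalisation: express raw scores
# as multiples of a declared baseline, in the manner of GeekBench's
# baseline of 2500 (an Intel Core i5-2520M @ 2.50 GHz).

BASELINE = 2500  # GeekBench (2013) baseline score

def relative_compute(score, baseline=BASELINE):
    """Express a raw benchmark score as a multiple of the baseline machine."""
    return score / baseline

# Hypothetical machines and scores, for illustration only.
machines = {"laptop-2011": 2500, "desktop-2013": 6200, "phone-2013": 1900}

for name, score in machines.items():
    print(f"{name}: {relative_compute(score):.2f}x baseline")
```

The normalisation itself is trivial; the contested part, as the Samsung case above shows, is what counts as a representative workload in the first place.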

On a material register, the unit of compute can be thought of as roughly the maximum potential processing capacity of a computer processing chip running for a notional hour. In today’s softwarized landscape, of course, processing power itself becomes a service, and hence is more often framed in terms of virtual machines (VMs) rather than actual physical machines – a number of compute instances can be realised on a single physical processor using sophisticated software to manage the illusion. Amazon itself defines compute through an abstraction of actual processing, as follows,

Transitioning to a utility computing model fundamentally changes how developers have been trained to think about CPU resources. Instead of purchasing or leasing a particular processor to use for several months or years, you are renting capacity by the hour. Because Amazon EC2 is built on commodity hardware, over time there may be several different types of physical hardware underlying EC2 instances. Our goal is to provide a consistent amount of CPU capacity no matter what the actual underlying hardware (Amazon 2013).

Indeed, Amazon tends to discuss compute in relation to its unit of EC2 Compute Unit (ECU) to enable the discretisation.[1] Google also uses an abstract quantity and measures “minute-level increments” of computational time (Google 2013). The key is to begin thinking about how an instance provides a predictable amount of dedicated compute capacity and as such is a temporal measure of computational power albeit seemingly defined rather loosely in the technical documentation.
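The temporal metering of compute described above can be sketched as a simple billing function: capacity is rented in time increments rather than owned, and the size of the increment (an hour for classic EC2 billing, a minute for Google's "minute-level increments") changes what the same work costs. The rate used below is hypothetical, not either provider's actual price.

```python
# A minimal sketch of utility-compute metering: usage is rounded up
# to the provider's billing increment. Rates here are hypothetical.

import math

def metered_cost(seconds_used, rate_per_hour, increment_minutes=60):
    """Bill compute time, rounded up to the provider's increment.

    increment_minutes=60 models classic hourly billing;
    increment_minutes=1 models minute-level increments.
    """
    increment_seconds = increment_minutes * 60
    increments = math.ceil(seconds_used / increment_seconds)
    return increments * rate_per_hour * (increment_minutes / 60)

# 90 minutes of work on a hypothetical $0.10/hour instance:
hourly = metered_cost(90 * 60, 0.10, increment_minutes=60)    # billed as 2 hours
by_minute = metered_cost(90 * 60, 0.10, increment_minutes=1)  # billed as 90 minutes
print(hourly, by_minute)
```

The discretisation of compute into billable increments is precisely what makes an abstract unit like the ECU necessary: the customer buys a predictable quantum of capacity over time, not a particular chip.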

The question of compute is then a question of the origin of computation more generally, but also of how the infrastructure of computation can be understood both qualitatively and quantitatively. Indeed, it is clear that the quantitative changes that greater compute capacity introduces make possible the qualitative experience of computation that we increasingly take for granted in our use of a heavily software-textured world. To talk about software, processes, algorithms and code is thus deficient without a corresponding understanding of the capacity of compute in relation to them – a key question for thinking about the conditions of possibility that computation creates for our lives today.


[1] Amazon used to define the ECU directly, stating: “We use several benchmarks and tests to manage the consistency and predictability of the performance of an EC2 Compute Unit. One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. This is also the equivalent to an early-2006 1.7 GHz Xeon processor referenced in our original documentation” (Berninger 2010). They appear to have stopped using this description in their documentation (see Amazon 2013). 


Amazon (2013) Amazon EC2 FAQs, accessed 05/01/2014,

Berninger, D. (2010) What the heck is an ECU?,  accessed 05/01/2014,

GeekBench (2013) GeekBench Processor Benchmarks, accessed 05/01/2014,

Google (2013) Compute Engine — Google Cloud Platform, accessed 05/01/2014,

Schwartz, R. (2013) The Dirty Little Secret About Mobile Benchmarks,  accessed 05/01/2014,

SPEC (2014) The Standard Performance Evaluation Corporation (SPEC), accessed 05/01/2014,

Stiegler, B. (2009) Acting Out, Stanford University Press.

SYSmark (2007),  SYSmark 2007 Preview, accessed 05/01/2014,

The Author Signal: Nietzsche’s Typewriter and Medium Theory

Malling-Hansen Writing Ball

One of the more poignant moments in Nietzsche’s long and tormented career was when the catalogue of his many ailments, both mental and physical, started to include encroaching blindness. To remedy this he turned in 1882 to experimentation with the (very primitive) typewriters of the time – a Malling-Hansen Writing Ball. This was a major crisis in his writing, as he had to accustom himself to what must have seemed almost an entirely new medium, and it led him to confess that “our writing tools are also working on our thoughts” (quoted in Kittler 1999). Nietzsche, who had dreamed of a machine that would transcribe his thoughts, chose the machine whose “rounded keyboard could be used exclusively through the sense of touch because on the surface of the sphere each spot is designated with complete certainty by its spatial position” (Kittler 1992: 193). Indeed, as Carr (2008) argues, “once he had mastered touch-typing [with the new typewriter], he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page.” The condition of possibility created by a particular medium forms an important part of the theoretical foundations of medium theory, which questions the way in which medial changes lead to epistemic changes. This has become an important area of inquiry in relation to the differences introduced by computation and digital media more generally (see Berry 2011). Indeed, in Nietzsche’s case,

One of Nietzsche’s friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. “Perhaps you will through this instrument even take to a new idiom,” the friend wrote in a letter, noting that, in his own work, his “‘thoughts’ in music and language often depend on the quality of pen and paper.”… “You are right,” Nietzsche replied, “our writing equipment takes part in the forming of our thoughts” (Carr 2008).

Stylistics and, perhaps above all, its younger and computerized daughter, stylometry, have already attempted to find stylistic or stylometric traces (the “author signal”) of similar changes in authors’ writing practices – with little positive result. The case of Henry James’s move from handwriting (and typewriting) to dictation in the middle of What Maisie Knew has been studied by Hoover (2009). Yet, according to Hoover, the author of The Ambassadors took this sudden change in his stride and, despite the fact that we know exactly where the switch occurred, stylometry has been helpless in this case; or, rather, it can show no sudden shift in the stylistic evolution that continues throughout James’s career (Hoover 2009). In a way, a similar problem was addressed by Le, Lancashire, Hirst and Jokel (2011) in their study of possible symptoms of Alzheimer’s disease in Agatha Christie’s word usage, and in their confirmation of the same diagnosis in Iris Murdoch. From another perspective, many studies exist on various authors’ switch from handwriting or typing to word processing (see also Lev Manovich’s [2008] work on cultural analytics).

Letter from Friedrich Nietzsche to Heinrich Köselitz, Geneva, Feb 17, 1882. The earliest typewriter-written text by Nietzsche still in existence.

Nietzsche’s case seemed somewhat more promising, as his attempts at typewriting were not only commented on by him but also made at a very early stage of mechanical text production – and at the overlap between discourse networks (Kittler 1992: 193). Nietzsche is thought to have used the typewriter only for a short period during 1882, an experiment claimed to have lasted either weeks (Kittler 1992) or up to a couple of months (Kittler 1999: 206); Günzel and Schmidt-Grépály (2002) state more concretely that he typed between February and March 1882, when he was also finishing The Gay Science. In fact, Nietzsche produced a collection of typed works he titled 500 Aufschriften auf Tisch und Wand: Für Narrn von Narrenhand. Nietzsche himself commented, “after a week [of typewriting practice,] the eyes no longer have to do their work” (Kittler 1999: 202).[1] Indeed, the technological shock may have been much stronger here than in the case of James, or of authors who, some twenty years ago, enthusiastically exchanged white correction fluid for the word-processor delete button and cut-and-paste. Using the typewriter, Nietzsche’s prose “changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style” (Kittler 1999: 203). Indeed, Kittler argues that,

Nietzsche’s reasons for purchasing a typewriter were very different from those of his colleagues who wrote for entertainment purposes, such as Twain, Lindau, Amytor, Hart, Nansen, and so on. They all counted on increased speed and textual mass production; the half-blind, by contrast, turned from philosophy to literature, from rereading to a pure, blind, and intransitive act of writing (Kittler 1999: 206).

In other words, the inscription technologies of Nietzsche’s time contributed to his thinking. Nevertheless, for Nietzsche the typewriter was “more difficult than the piano, and long sentences were not much of an option” (Emden 2005: 29). Even after his failed experimentation with the typewriter, he remained enthralled by its possibilities – “the assumed immediacy of the written word… seemingly connected in a direct way to the thoughts and ideas of the author through the physical movement of the hand… was displaced by the flow of disconnected letters on a page, one as standardized as another” (Emden 2005: 29).

The turning point for Kittler (1999) is represented by The Genealogy of Morals, which was written in 1887 – by now Nietzsche was forced by continued poor vision to use secretaries to record his words. Here, it is argued that Nietzsche elevated the typewriter itself to the “status of a philosophy,” suggesting that “humanity had shifted away from its inborn faculties (such as knowledge, speech, and virtuous action) in favor of a memory machine. [When] crouched over his mechanically defective writing ball, the physiologically defective philosopher [had] realize[d] that ‘writing . . . is no longer a natural extension of humans who bring forth their voice, soul, individuality through their handwriting. On the contrary, . . . humans change their position – they turn from the agency of writing to become an inscription surface'” (Winthrop-Young and Wutz 1999: xxix).

In the very tentative analysis presented here (and which must be redone with a greater collection of Nietzsche’s works), the standard stylometric procedure of comparing normalized word frequencies of the most frequent words in the corpus was applied by means of the “stylo” (ver. 0-4-7) script for the R statistical programming environment (Eder and Rybicki 2011).

The script converts the electronic texts to produce complete most-frequent-word (MFW) frequency lists, calculates their z-scores in each text according to the Delta procedure (Burrows 2002); uses the top frequency lists for analysis; performs additional procedures for better accuracy (including Hoover’s culling, the removal of all words that do not appear in all the texts for better independence of content); compares the results for individual texts; produces Cluster Analysis tree diagrams that show the distances between the texts; and, finally, combines the tree diagrams made for various parameters (number of words used in each individual analysis) in a bootstrap consensus tree (Dunn et al. 2005, quoted in Baayen 2008: 143-147). The script, in its ever-evolving versions, is available online (Eder, Rybicki and Kestemont 2012). The consensus tree approach, based as it is on numerous iterations of attribution tests at varying parameters, has already shown itself as a viable alternative to single-iteration analyses (Rybicki 2012, Eder and Rybicki 2012).
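The core of what the script automates can be sketched in a few lines. The sketch below is illustrative only: the article used the “stylo” script for R, so the function names, the tiny-corpus example and the Delta variant shown (classic Burrows’s Delta as the mean absolute difference of z-scored most-frequent-word frequencies, following Burrows 2002) are assumptions of this sketch, not the script’s actual code:

```python
import statistics
from collections import Counter

def mfw_frequencies(texts, n_mfw=100):
    """Relative frequencies of the corpus's n most frequent words.

    texts: dict mapping a text label to its list of word tokens.
    Returns (vocabulary, {label: {word: relative frequency}}).
    """
    corpus = Counter()
    for tokens in texts.values():
        corpus.update(tokens)
    vocab = [word for word, _ in corpus.most_common(n_mfw)]
    freqs = {}
    for label, tokens in texts.items():
        counts = Counter(tokens)
        freqs[label] = {w: counts[w] / len(tokens) for w in vocab}
    return vocab, freqs

def burrows_delta(vocab, freqs, a, b):
    """Burrows's Delta: mean absolute difference of z-scored MFW frequencies."""
    total = 0.0
    for w in vocab:
        column = [freqs[label][w] for label in freqs]
        mean = statistics.mean(column)
        sd = statistics.pstdev(column) or 1.0  # guard against zero variance
        total += abs((freqs[a][w] - mean) / sd - (freqs[b][w] - mean) / sd)
    return total / len(vocab)
```

Culling, cluster analysis and the bootstrap consensus tree are then further procedures built on top of a matrix of such pairwise distances.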

The first analysis was performed on complete texts of six works by Nietzsche: Die Geburt der Tragödie (1872) and Menschliches, Allzumenschliches (1878), both written before 1879, his “year of blindness,” and before his typewriter experiments of 1882; and Also sprach Zarathustra (1883-5), Jenseits von Gut und Böse (1886), and Ecce homo and Götzen-Dämmerung (both 1888). The resulting graph suggests a chronological evolution of Nietzschean style, as the early works cluster to the right and the later ones to the left of Figure 1.

Figure 1, chronological evolution of Nietzschean style

Yet the pattern above shares the usual problem of multivariate graphs for just a few texts: a possibility of randomness in the order of clusters. This is why it makes sense to perform another analysis, this time on the above texts divided into equal-sized segments (10,000 words is usually safe). Figure 2 confirms the chronological evolution pattern as the segments of each individual book are correctly clustered together. What is more, the previous result is corroborated by a very similar pattern in terms of creation date.
Figure 2, chronological evolution pattern as segments of each individual book are clustered together
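The division into equal-sized samples described above is simple to state precisely. A minimal sketch follows: the 10,000-word default comes from the article, while the function name and the convention of discarding the trailing remainder (so that every sample has identical length) are assumptions of this illustration:

```python
def equal_segments(tokens, size=10_000):
    """Split a token list into consecutive segments of exactly `size` words.

    Trailing tokens that cannot fill a complete segment are discarded,
    so every sample entering the cluster analysis has the same length.
    """
    return [tokens[i:i + size] for i in range(0, len(tokens) - size + 1, size)]
```

For a 26,000-word text this yields two complete 10,000-word samples, with the 6,000-word remainder discarded.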
As has been said above, a greater number of texts is needed to confirm these initial findings. There is indeed a clear division of Nietzschean style into early and late(r). Whether this is a repetition of a phenomenon observed in many other writers (Henry James, for one), or a direct impact of technological change and therefore a confirmation of the claims of medium theory, remains to be investigated. Nonetheless, this approach offers an additional method to explore how medial change can be mapped in relation to changes in knowledge. It also offers a potential means for exploring the way in which contemporary debates over the introduction of computational and digital means of creating, storing and distributing knowledge affect the way in which authorship itself is undertaken.

This doesn’t have to be strictly between media: there is also potential for exploring intra-medial change, and the way in which writing has been influenced by the long dark ages of Microsoft Word as the hegemonic form of digital writing (1983-2012), which gradually appears to be coming to an end in the age of locative media, apps, and real-time streams. Indeed, with exploratory digital literature forms, represented in ebooks, computational document format (CDF) and apps such as Tapestry, which allow the creation of “tap essays” (Gannes 2012), new ways of authoring and presenting knowledge are suggested. Only a short perusal of Apple iBooks Author, for example, shows the way in which the paper forms underlying the digital writings of the 20th century are giving way to new ways of writing and structuring text within the framework of a truly digital medium, made possible through tablet computers, smart phones and the emerging “tabs, pads and boards” three-screen world.
With digital forms, new ways of presenting and storing knowledge are also constructed – not just the relational database, but also object-oriented, graph and other forms – with which people are increasingly familiar as modes of practice for manipulating knowledge. How this will change the writing of future literature remains to be seen, but Kittler clearly foresaw an important turn in the way in which we should research and understand these processes, writing,

To put it plainly: in contrast to certain colleagues in media studies, who first wrote about French novels before discovering French cinema and thus only see the task before them today as publishing one book after another about the theory and practice of literary adaptations… In contrast to such cheap modernizations of the philological craft, it is important to understand which historical forms of literature created the conditions that enabled their adaptation in the first place. Without such a concept, it remains inexplicable why certain novels by Alexandre Dumas, like The Three Musketeers, have been adapted for film hundreds of times, while old European literature, from Ovid’s Metamorphoses to weighty baroque tomes, were simple non-starters for film… It is possible… to conclude from the visually hallucinatory ability that literature acquired around 1800 that a historically changed mode of perception had entered everyday life. As we know, after a preliminary shock Europeans and North Americans learned very quickly and easily how to decode film sequences. They realized that film edits did not represent breaks in the narrative and that close-ups did not represent heads severed from bodies. (Kittler 2009: 108)

Equally, today in a world filled with everyday computational media, Europeans and North Americans are learning very quickly to adapt to the real-time streaming media of the 21st Century. We are no longer surprised when live television is paused to make a drink, or our mobile phone tells us that we are running late for a meeting and offers us a quicker route to get to the location. Nor are we perplexed by multiple screens, screens within screens, transmedia storytelling, social media, or even contextual navigation and adaptive user interfaces. Thus new social epistemologies are emerging in relation to computational media, that is, “the conditions under which groups of agents (from generations to societies) acquire, distribute, maintain and update (claims to) belief and knowledge [has changed] through the active mediation of code/software” (Berry 2012: 380). Again, a historically changed mode of perception has entered everyday life, and which we can explore through its traces in cultural artefacts, such as literature, film, television, software and so forth. 
With the suggestive analysis offered in this short article, we hope to have demonstrated how computational approaches can generate research questions in relation to medium theory – questions which, although not necessarily offering conclusive results, nonetheless press us to explore further the links between medial and epistemic change.
David M. Berry and Jan Rybicki


[1] According to Günzel and Schmidt-Grépály (2002), Nietzsche typed 15 letters, 1 postcard and 34 bulk sheets (including some poems and verdicts) with his ‘Schreibkugel‘ from Malling-Hansen in 1882.

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave.

Berry, D. M. (2012) The Social Epistemologies of Software, Social Epistemology: A Journal of Knowledge, Culture and Policy, 26:3-4, 379-398

Burrows, J.F. (2002) “Delta: A Measure of Stylistic Difference and a Guide to Likely Authorship,” Literary and Linguistic Computing 17: 267-287.
Carr, N. (2008) Is Google Making Us Stupid?, The Atlantic, accessed 19/12/2012,
Dunn, M., Terrill, A., Reesink, G., Foley, R.A. and Levinson, S.C. (2005) “Structural Phylogenetics and the Reconstruction of Ancient Language History,” Science 309: 2072-2075. Quoted in Baayen, R.H. (2008) Analyzing Linguistic Data. A Practical Introduction to Statistics using R, Cambridge: Cambridge University Press.
Eder, M. and Rybicki, J. (2011). Stylometry with R. Stanford: Digital Humanities 2011.
Eder, M. Rybicki, J., and Kestemont, M. (2012). Computational Stylistics, accessed 19/12/2012,
Eder, M. and Rybicki, J. (2012). “Do Birds of a Feather Really Flock Together, or How to Choose Test Samples for Authorship Attribution,” Literary and Linguistic Computing, First published online August 11, 2012: 10.1093/llc/fqs036.
Emden, C. (2005) Nietzsche On Language, Consciousness, And The Body, University of Illinois Press.

Gannes, L. (2012) When an App Is an Essay Is an App: Tapestry by Betaworks , Wall Street Journal
Günzel, S. and Schmidt-Grépály, R. (2002) (eds.) Friedrich Nietzsche. Schreibmaschinentexte, 2nd edition, Weimar: Verlag der Bauhaus Universität, accessed 19/12/2012,

Hoover, D. L. (2009) “Modes of Composition in Henry James: Dictation, Style, and What Maisie Knew,” Digital Humanities 2009, University of Maryland, June 22-25.
Kittler, F. A. (1992) Discourse Networks, 1800/1900, Stanford University Press.
Kittler, F. A. (1999) Gramophone, Film, Typewriter, translated by Geoffrey Winthrop-Young and Michael Wutz, Stanford: Stanford University Press, 200-208, quoted in Patricia Falguières, “A Failed Love Affair with the Typewriter”, rosa b, accessed 19/12/2012,
Kittler, F. A. (2009) Optical Media, London: Polity Press.
Le, X., Lancashire, I., Hirst, G., and Jokel, R. (2011) “Longitudinal detection of dementia through lexical and syntactic changes in writing: a case study of three British novelists,” Literary and Linguistic Computing, 26(4): 435-461
Manovich, L. (2008) Cultural Analytics, accessed 19/12/2012,
Rybicki, J. (2012) “The Great Mystery of the (Almost) Invisible Translator: Stylometry in Translation.” In Oakes, M., Ji, M. (eds). Quantitative Methods in Corpus-Based Translation Studies, Amsterdam: John Benjamins.
Winthrop-Young, G. and Wutz, M. (1999) Translators’ Introduction, in Kittler, F. A., Gramophone, Film, Typewriter, Stanford University Press.

Against Remediation

A new aesthetic through Google Maps

In contemporary life, the social is a site for a particular form of technological focus and intensification. Traditional social experience has, of course, been subject to various forms of technical mediation, formatting and control technologies. Think, for example, of the way in which the telephone structured the conversation, diminishing the value of proximity whilst simultaneously intensifying certain kinds of bodily response and language use. It is important, then, to trace media genealogies carefully and to be aware of the previous ways in which the technological and the social have met – including the missteps, mistakes, dead-ends, and dead media. This understanding of media, however, has increasingly been framed in terms of the notion of remediation, which has been thought to contribute helpfully to our thinking about media change whilst sustaining a notion of medium specificity. Bolter and Grusin (2000), who coined its contemporary usage, state,

[W]e call the representation of one medium in another remediation, and we will argue that remediation is a defining characteristic of the new digital media. What might seem at first to be an esoteric practice is so widespread that we can identify a spectrum of different ways in which digital media remediate their predecessors, a spectrum depending on the degree of perceived competition or rivalry between the new media and the old (Bolter and Grusin 2000: 45).

However, it seems to me that we now need to move beyond talk of the remediation of previous modes of technological experience and media, particularly when we attempt to understand computational media. I think that this is important for a number of reasons, both theoretical and empirical. Firstly, in a theoretical vein, the concept of remediation has become a hegemonic concept and as such has lost its theoretical force and value. Remediation traces its intuition from McLuhan’s notion that the content of a new media is an old media – McLuhan actually thought of “retrieval” as a “law” of media. But it seems to me that beyond a fairly banal point, this move has the effect of both desensitising us to the specificity and materiality of a “new” media, and more problematically, resurrecting a form of media hauntology, in as much as the old media concepts “possess” the new media form. Whilst it might have held some truth for the old “new” media, although even here I am somewhat sceptical, within the context of digital, and more particularly computational media, I think the notion is increasingly unhelpful. Secondly, remediation gestures toward a depth model of media forms, within which it encourages a kind of originary media, origo, to be postulated, or even to remain latent as an a priori. This enables a form of reading of the computational which justifies a disavowal of the digital, through a double movement of simultaneously exclaiming the newness of computational media, whilst hypostatizing a previous media form “within” the computational.

Thirdly, I do not believe that it accurately describes the empirical situation of computational media; in fact, it obfuscates the specificity of the computational in relation to its structure and form. This has a secondary effect in as much as analysis of computational media is viewed through a lens, or method, that is legitimated through this prior claim to remediation. Fourthly, I think remediation draws its force through a reliance on ocularity: remediation is implicitly visual in its conceptualisation of media forms, and the way in which one medium contains another relies on a deeply visual metaphor. This is significant in relation to the hegemony of the visual form of media in the twentieth century. Lastly, and for this reason, I think it is time for us to historicize the concept of remediation. Indeed, remediation seems to me to be a concept appropriate to the media technologies of the twentieth century, shaped by the historical context of thinking about media in relation to the materialities of those prior media forms and the constellation of concepts which appeared appropriate to them. We need to think computational media in terms which de-emphasize, or certainly reduce, the background assumption of remediation as something akin to a looking glass, and think instead in terms of a medium as an agency or means of doing something – this means thinking beyond the screenic.

So in this paper, in contrast to talk of “remediation”, and in the context of computational media, I want to think about de-mediation – that is, when a media form is no longer dominant, becoming marginal, and later absorbed/reconstructed in a new medium which en-mediates it. By en-mediate I want to draw attention to the securing of the boundaries related to a format, that is, a representation, or mimesis, of a previous medium – but it is not the “same”, nor is it “contained” in the new medium. This distinction is important because, at the moment of enmediation, computational categories and techniques transform the newly enmediated form – I am thinking here, for example, of the examples given by the new aesthetic and related computational aesthetics. By enmediate I also want to draw links with Heidegger’s notion of enframing (Gestell) and the structuring provided by a condition of possibility, that is, a historical constellation of concepts. I want to highlight, too, the processual computational nature of en-mediation; in other words, enmediation requires constant work to stabilize the enmediated media. In this sense, computational media is deeply related to enmediation as a total process of mediation through digital technologies. One way of thinking about enmediation is to understand it as gesturing towards a paradigmatic shift in the way in which “to mediate” should be understood – one which does not relate to “passing through” or “informational transfer” as such; rather, enmediation, in this discussion, aims to enumerate and uncover the specificity of computational mediation as machinic processing.

I therefore want to move quickly to thinking about what it means to enmediate the social. By the term “social” I am thinking particularly of the mediational foundations for sociality that were made available in twentieth-century media, and which, when enmediated, become something new. So sociality is not remediated, it is enmediated – that is, the computational mediation of society is not the same as the mediation processes of broadcast media; rather, it has a specificity that is occluded if we rely on the concept of remediation to understand it. Thus, it is not an originary form of sociality that is somehow encoded within media (or even constructed/co-constructed), and which is re-presented in the multiple remediations that have occurred historically. Rather, it is the enmediation of specific forms of sociality which, in the process of enmediation, are themselves transformed, constructed and made possible in a number of different and historically specific modes of existence.

Bolter, J. D. and Grusin, R. (2000) Remediation: Understanding New Media, MIT Press.

New Aesthetic Argumentum Ad Hominem

Papercraft Self Portrait – 2009 (Testroete)

One of the most frustrating contemporary ways to attack any new idea, practice or moment is to label it as “buzz-worthy” or an “internet meme”. The weakness of this attack should be obvious, but strangely it has become a powerful way to dismiss things without applying any critical thought to the content of the object of discussion. In other words, it is argumentation petitio principii, where the form of the argument is: “the internet meme, the new aesthetic, should be ignored because it is an internet meme”. Or even, in some forms, an argumentum ad hominem, where the attack is aimed at James Bridle (as the originator of the term) rather than at the new aesthetic itself. Equally, the attacks may be combined.

I think the whole ‘internet meme’, ‘buzz’, ‘promotional strategy’ angle on the new aesthetic is indicative of a wider set of worries related to a new scepticism, as it were (connected, possibly, to the skepticism movement as well). We see it on Twitter, where the medium of communication seems to encourage a kind of mass scepticism, in which everyone simultaneously makes the same point: that the other side is blindly following, a ‘fanboy’, irrational, suspect, or somehow beholden to a dark power seeking to close, restrict or tighten individual freedoms – of course, the ‘I’ is smart enough to reject the illusion and unmask the hidden forces. This is also, I think, a worry about being caught out, being laughed at, or being distracted by (yet) another internet fad. I also worry that the new aesthetic ‘internet meme’ criticism is particularly ad hominem, usually aimed, as it is, at its birth within the creative industries. I think we really need to move on from this level of scepticism and be more dialectical in our attitude towards the possibilities in, and suggested by, the new aesthetic. This is where critical theory can be a valuable contributor to the debate.

For example, part of the new aesthetic is a form of cultural practice related to a postmodern and fundamentally paranoid vision of being watched, observed, coded, processed or formatted. I find the aesthetic dimension of this particularly fascinating, in as much as the representational practices are often (but not always) retro and, in some senses, tangential to the physical, cultural, or even computational processes actually associated with such technologies. This is, I suppose, a distraction, in as much as it misses the target – if we assume that the real can ever be represented accurately (which I don’t) – but also, and more promisingly, an aesthetic that remains firmly human mediated, contra the claims of those who want to “see like machines”. That is, the new aesthetic is an aestheticization of computational technology and computational techniques more generally. It is also fascinating in terms of its refusal to abide by the careful boundary monitoring of art and the ‘creative industry’ more generally, really bringing to the fore the questions raised by Liu, for example, in The Laws of Cool. One might say that it follows the computational propensity towards the dissolving of traditional boundaries and disciplinary borders.

I also find the new aesthetic important for it has an inbuilt potentiality towards critical reflexivity, both towards itself (does the new aesthetic exist?) but also towards both artistic practice (is this art?), curation (should this be in galleries?), and technology (what is technology?). There is also, I believe, an interesting utopian kernel to the new aesthetic, in terms of its visions and creations – what we might call the paradigmatic forms – which mark the crossing over of certain important boundaries, such as culture/nature, technology/human, economic/aesthetic and so on. Here I am thinking of the notion of augmented humanity, or humanity 2.0, for example. This criticality is manifested in the new aesthetic’s continual seeking to ‘open up’ black boxes of technology, to look at developments in science, technology and technique and to try to place them within histories and traditions – in the reemergence of social contradictions, for example. But even an autonomous new aesthetic, as it were, points towards the anonymous and universal political and cultural domination represented by computational techniques which are now deeply embedded in systems that we experience in all aspects of our lives. There is much to explore here.

Moroso pixelated sofa and nanimaquina rug, featured on Design Milk

The new aesthetic, of course, is as much symptomatic of a computational world as itself subject to the forces that drive that world. This means that it has every potential to be sold, standardised, and served up to the willing mass of consumers like any other neatly packaged product – perhaps even more so, given its ease of distribution and reconfiguration within computational systems such as Twitter and Tumblr. But it doesn’t have to be that way, and so far I remain hopeful that even in its impoverished, consumerized form it still serves notice of computational thinking and processes, which then stand out against other logics. This is certainly one of the interesting dimensions of the new aesthetic, both in terms of the materiality of computationality and in terms of the need to understand the logics of postmodern capitalism, even ones as abstract as obscure computational systems of control.

For me, the very possibility of a self-defined new ‘aesthetic’ enables this potentiality – of course, there are no simple concepts as such, but the new aesthetic, for me, acts as a “bridge” (following Deleuze and Guattari for a moment). Claiming that it is a new ‘aesthetic’ makes available the conceptual resources associated with, and materialised in, practices, which may need to be “dusted off” and used as if they were, in a sense, autonomous (that is, even, uncritical). This decoupling of the concept (no matter that in actuality one might claim that no such decoupling could really have happened) potentially changes the nature of the performances that are facilitated or granted by the space opened within the constellation of concepts around the ‘new aesthetic’ (again, whatever that is) – in a sense this might also render components within the new aesthetic inseparable, as the optic of the new aesthetic, like any medium, may change the nature of what can be seen. Again, this is not necessarily a bad thing.

Glitch Textiles by Phillip David Stearns

Another way of putting it, perhaps, would be that a social ontology is made possible which, within the terms of the constellation of practices and concepts grounding it, is both distanced from and placed in opposition to existing and historical practices. Where this is interesting is that, so far, the new aesthetic, as a set of curatorial or collectionist practices, has been deeply recursive in its manifestation – both computational in structure (certainly something that interests me about it) and also strikingly visual (so far) – and here, I think, the possibility of an immanent critique central to the new aesthetic can be identified. Of course, it is too early to say how far we can push this, especially with something as nascent as the new aesthetic, which is still very much a contested constellation of concepts and ideas playing out in various media forms; but nonetheless, I suggest that one might still detect the outlines of a kind of mediated non-identity implicit within the new aesthetic, and this makes it interesting. So I am not claiming, in any sense, that the new aesthetic was “founded on critical thinking”, rather that, in a similar way, computational processes are not “critical thinking” but contain a certain non-reflexive reflexivity when seen through their recursive strategies – though again this is a potentiality that needs to be uncovered, and is not in any sense determined. This is, perhaps, the site of a politics of the new aesthetic.

Certainly there is much work to be done with the new aesthetic, and I, for one, do not think that everything is fixed in aspic – either by Bridle or by any of the other commentators. Indeed, there is a need for thinking about the new aesthetic from a number of different perspectives; that, for me, is the point at which the new aesthetic is interesting to think with, and pushing it away seems to me an “over-hasty” move when it clearly points either to a fresh constellation of concepts and ideas, or certainly to a means for us to think about the old constellations in a new way. This means that we should aim to be neither for nor against the new aesthetic as such, but rather interested in the philosophical and political work the new aesthetic makes possible.

New Book: New Aesthetic, New Anxieties

New Aesthetic New Anxieties is the result of a five-day Book Sprint organized by Michelle Kasprzak and led by Adam Hyde at V2_ from June 17–21, 2012. Authors: David M. Berry, Michel van Dartel, Michael Dieter, Michelle Kasprzak, Nat Muller, Rachel O’Reilly and José Luis de Vicente. Facilitated by: Adam Hyde.

You can download the e-book as an EPUB, MOBI, or PDF.




Annotatable online version:

The New Aesthetic was a design concept and netculture phenomenon launched into the world by London designer James Bridle in 2011. It continues to attract the attention of media art, and throw up associations to a variety of situated practices, including speculative design, net criticism, hacking, free and open source software development, locative media, sustainable hardware and so on. This is how we have considered the New Aesthetic: as an opportunity to rethink the relations between these contexts in the emergent episteme of computationality. There is a desperate need to confront the political pressures of neoliberalism manifested in these infrastructures. Indeed, these are risky, dangerous and problematic times; a period when critique should thrive. But here we need to forge new alliances, invent and discover problems of the common that nevertheless do not eliminate the fundamental differences in this ecology of practices. In this book, perhaps provocatively, we believe a great deal could be learned from the development of the New Aesthetic not only as a mood, but as a topic and fix for collective feeling, that temporarily mobilizes networks. Is it possible to sustain and capture these atmospheres of debate and discussion beyond knee-jerk reactions and opportunistic self-promotion? These are crucial questions that the New Aesthetic invites us to consider, if only to keep a critical network culture in place.


New Book: Life in Code and Software: Mediated life in a complex computational ecology

Life in Code and Software (cover image by Michael Najjar)

New book out in 2012 from Open Humanities Press: Life in Code and Software: Mediated life in a complex computational ecology.


This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. Life in Code and Software introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology, made up of disparate software ecologies, that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. That is, we live in a world where computational concepts and ideas are foundational, or ontological, which I call computationality, and within which code and software become the paradigmatic forms of knowing and doing, such that other candidates for this role (air, the economy, evolution, the environment, satellites, etc.) are understood and explained through computational concepts and categories.




The New Bifurcation? Object-Oriented Ontology and Computation

Alan Turing

There are now some interesting challenges emerging to the philosophical systems described in object-oriented ontology, such as Alex Galloway’s recent piece, ‘A response to Graham Harman’s “Marginalia on Radical Thinking”’ and Christian Thorne’s, ‘To The Political Ontologists‘, as well as my own contribution, ‘The Uses of Object-Oriented Ontology‘.

Here, I want to tentatively explore the links between my own notion of computationality as ontotheology and how object-oriented ontology unconsciously reproduces some of these structural features that I think are apparent in its ontological and theological moments. In order to do this, I want to begin outlining some of the ways one might expect the ‘ontological moment’, as it were, to be dominated by computational categories and ideas which seem to hold greater explanatory power. In this regard I think this recent tweet by Robert Jackson is extremely revealing,

Robert Jackson (@Recursive_idiot)

04/06/2012 13:34

I think this Galloway / OOO issue can be resolved with computability theory. Objects / units need not be compatible with the state.

Revealing, too, are the recent discussions by members of object-oriented ontology and the importance of the computational medium for facilitating its reproduction – see Levi Bryant’s post ‘The Materiality of SR/OOO: Why Has It Proliferated?‘, and Graham Harman’s post ‘on philosophical movements that develop on the internet‘.

It is interesting to note that these philosophers do not take account of the possibility that the computational medium itself may have transformed the way in which they understand the ontological dimension of their projects. Indeed, the taken-for-granted materiality of digital media is clearly being referred to in relation to a form of communication theory – as if the internet were merely a transparent transmission channel – rather than seeing the affordances of the medium encouraging, shaping, or creating certain ways of thinking about things, as such.

Of course, they might respond that the speed and publishing affordances clearly allow them to get their messages out quicker, correct them, and create faster feedback and feedforward loops. However, I would argue that the computational layers (software, applications, blogs, tweets, etc.) also discipline the user/writer/philosopher to think within and through particular computational categories. I think it is not a coincidence that what is perhaps the first internet or born-digital philosophy has certain overdetermined characteristics that reflect the medium within which it has emerged. I am not alone in making this observation; indeed, Alexander Galloway has started to examine the same question, writing,

[T]he French philosopher Catherine Malabou asks: “What should we do so that consciousness of the brain does not purely and simply coincide with the spirit of capitalism?”….Malabou’s query resonates far and wide because it cuts to the heart of what is wrong with some philosophical thinking appearing these days. The basic grievance is this: why, within the current renaissance of research in continental philosophy, is there a coincidence between the structure of ontological systems and the structure of the most highly-evolved technologies of postfordist capitalism? I am speaking, on the one hand, of computer networks in general, and object-oriented computer languages (such as Java or C++) in particular, and on the other hand, of certain realist philosophers such as Bruno Latour, but also more pointedly Quentin Meillassoux, Graham Harman, and their associated school known as “speculative realism.” Why do these philosophers, when holding up a mirror to nature, see the mode of production reflected back at them? Why, in short, a coincidence between today’s ontologies and the software of big business? (Galloway, forthcoming, original emphasis)

He further argues:

Philosophy and computer science are not unconnected. In fact they share an intimate connection, and have for some time. For example, set theory, topology, graph theory, cybernetics and general system theory are part of the intellectual lineage of both object-oriented computer languages, which inherit the principles of these scientific fields with great fidelity, and for recent continental philosophy including figures like Deleuze, Badiou, Luhmann, or Latour. Where does Deleuze’s “control society” come from if not from Norbert Wiener’s definition of cybernetics? Where do Latour’s “actants” come from if not from systems theory? Where does Levi Bryant’s “difference that makes a difference” come from if not from Gregory Bateson’s theory of information? (Galloway, forthcoming).

Ian Bogost’s (2012) Alien Phenomenology is perhaps the most obvious case where the links between a computational approach and a philosophical system are deeply entwined: objects, units, collections, lists, software philosophy, carpentry (as programming), etc. Indeed, Robert Jackson also discusses some of the links with computation, making connections between the notions of interfaces and encapsulation in object-oriented programming and object-oriented ontology’s notion of withdrawal, and so forth. He writes,

Encapsulation is the notion that objects have both public and private logics inherent to their components. But we should be careful not to regard the notion that private information is deliberately hidden from view, it is rather the unconditional indifference of objects qua objects. Certain aspects of the object are made public and others are occluded by blocking off layers of data. The encapsulated data can still be related to, even if the object itself fails to reveal it (Jackson 2011).

This, he argues, serves as a paradigmatic example of the object-oriented ontologists’ speculations about objects as objects. Therefore, a research project around object-oriented computational systems would, presumably, allow us to cast light on wider questions about other kinds of objects, after all, objects are objects, in the flat ontology of object-oriented ontology. In contrast, I would argue that it is no surprise that object-oriented ontology and object-oriented programming have these deep similarities as they are drawing from the same computational imaginary, or foundational ideas, about what things are or how they are categorised in the world, in other words a computational ontotheology – computationality.
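The encapsulation Jackson describes can be seen in any object-oriented language. A minimal sketch in Java (a language Galloway names above) might look as follows; the class and member names here are purely illustrative, not drawn from any of the cited texts:

```java
// Illustrative sketch of encapsulation: the object's internal state
// is private (occluded from other objects), while a public interface
// is the only channel through which that state can be related to.
public class Account {
    // Private state: inaccessible to other objects, which can never
    // "exhaust" it directly.
    private long balance = 0;

    // Public interface: what the object chooses to make available.
    public void deposit(long amount) {
        if (amount > 0) {
            balance += amount;
        }
    }

    public long getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(100);
        // a.balance would not compile here if Account were in another
        // class; only the public accessor exposes a view of the state.
        System.out.println(a.getBalance()); // prints 100
    }
}
```

Other objects can thus relate to the encapsulated data (via `deposit` and `getBalance`) even though the object never reveals the private field itself, which is the structural parallel Jackson draws to withdrawal.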

The next move is the step that Alex Galloway makes, to link this to the wider capitalist order, postfordist or informational capitalism (what I would call Late Capitalism). He then explores how this ideological superstructure is imposed onto a capitalist mode of production, both to legitimate and to explain its naturalness or inevitability. Galloway argues,

(1) If recent realist philosophy mimics the infrastructure of contemporary capitalism, should we not show it the door based on this fact alone, the assumption being that any mere repackaging of contemporary ideology is, by definition, anti-scientific and therefore suspect on epistemological grounds? And (2) even if one overlooks the epistemological shortcomings, should we not critique it on purely political grounds, the argument being that any philosophical project that seeks to ventriloquize the current industrial arrangement is, for this very reason, politically retrograde? (Galloway, forthcoming).

He further writes,

Granted, merely identifying a formal congruity is not damning in itself. There are any number of structures that “look like” other structures. And we must be vigilant not to fetishize form as some kind of divination–just as numerology fetishizes number. Nevertheless are we not obligated to interrogate such a congruity? Is such a mimetic relationship cause for concern? Meillassoux and others have recently mounted powerful critiques of “correlationism,” so why a blindness toward this more elemental correlation?… What should we do so that our understanding of the world does not purely and simply coincide with the spirit of capitalism? (Galloway, forthcoming, original emphasis).

Galloway concludes his article by making the important distinction between materialism and realism, pointing out that materialism must be historical and critical whereas realism tends towards an ahistoricism. By historicising object-oriented ontology, we are able to discern the links between the underlying computational capitalism and its theoretical and philosophical manifestations.

Charles Darwin

More work needs to be done here to trace the trajectories that are hinted at, particularly the computationality I see implicit in object-oriented ontology and speculative realism more generally. But I also want to tentatively gesture towards object-oriented ontology as one discourse contributing to a new bifurcation (as Whitehead referred to the nature/culture split). In this case, not between nature and culture, which today have begun to reconnect as dual hybridised sites of political contestation – for example, climate change – but rather as computation versus nature-culture.

Nature-culture becomes a site of difference, disagreement, political relativism and a kind of ‘secondary’ quality, in other words ‘values’ and ‘felicity conditions’. Computationality, or some related ontological form, becomes the site of primary qualities or ‘facts’: the site of objectivity, foundational, ahistorical, unchanging, and a replacement for nature in modernity as the site of agreement upon which a polity is made possible – a computational society.

Here, the abstract nature of objects within object-oriented programming, formal objects which inter-relate to each other and interact (or not), and yet remain deeply computational, mathematical and discrete is more than suggestive of the flat ontology that object-oriented ontology covets. The purification process of object-oriented design/programming is also illustrative of the gradual emptying of the universe of ‘non-objects’ by object-oriented ontology, which then serves to create ontological weight, and the possibility of shared consensus within this new bifurcated world. This creates a united foundation, understood as ontological, a site of objectivity, facts, and with a strict border control to prevent this pure realm being affected by the newly excised nature-culture. Within this new bifurcation, we see pure objects placed in the bifurcated object-space and subjects are located in the nature-culture space – this is demonstrated by the empty litanies that object-oriented ontologists share and which describe abstract objects, not concrete entities. This is clearly ironic in a philosophical movement that claims to be wholly realist and displays again the anti-correlationist paradox of object-oriented ontology.

This ontological directive also points thought towards the cartography of pure objects, propositions on the nature of ‘angels’, ‘Popeye’ and ‘unicorns’, and commentary on commentary in a scholastic vortex through textual attempts to capture and describe this abstract sphere – without ever venturing into the ‘great outdoors’ that object-oriented ontologists claim to respect. What could be closer to the experience of contemporary capitalist experience than the digital mazes that are set up by the likes of Facebook and Google, to trap the user into promises of entertainment and fulfilment by moving deeper and deeper around the social ontologies represented in capitalist social networks, and which ultimately resolve in watching advertisements to fuel computational capitalism?

Galloway rightly shows us how to break this spell, reflected also in the object-oriented ontologists’ refusal to historicise, through a concrete analysis of the historical and material conditions of production. He writes:

One might therefore label this the postfordist response to philosophical realism in general and Meillassoux in particular: after software has entered history, math cannot and should not be understood ahistorically… math itself, as algorithm, has become a historical actor. (Galloway, forthcoming, original emphasis).


Bogost, I. (2012a) Alien Phenomenology: or What It’s Like To Be A Thing, Minnesota University Press.

Galloway, A. R. (forthcoming) The Poverty of Philosophy: Realism and Postfordism, copy supplied by the author.

Jackson, R. (2011) Why we should be Discrete in Public – Encapsulation and the Private lives of Objects, accessed 04/06/2012.
