
Against the Computational Creep

In this short post I want to think about the limits of computation: not the theoretical limits of the application or theorisation of computation itself, but the limits within which computation in a particular context ought to be contained. This is necessarily a normative position, but what I am trying to explore is the point at which computation, which can bring great advantages to a process, institution or organisation, starts to undermine or corrode the way in which a group, institution or organisation is understood, functions or creates a shared set of meanings. Here, though, I will limit myself to the theorisation of this question rather than its methodological implications, and to how we might begin to develop a politics of computation that can test and articulate these limits, together with a set of critical approaches that are also a politicisation of algorithms and of data.

By computational creep I mean the development of computation as a process rather than an outcome or thing. This notion of “creep” has been usefully identified by Ross (2017: 14) in relation to extreme political movements that advance by what he calls “positive intermingling”.[1] I think this is a useful way to think about computationalism, by which I do not merely mean the idea that consciousness is modelled on computation (e.g. see Golumbia 2009), but more broadly a set of ideas and a style of thought which argues that computational approaches are by their very nature superior to other ways of thinking and doing (Berry 2011, 2014). This is also related to the notion that anything that has not been “disrupted” by computation is, by definition, inferior in some sense, or is latent material awaiting its eventual disruption or reinvention through the application of computation. I would like to argue that this process of computational creep proceeds in six stages:

  1. Visionary-computational: Computation is suggested as a solution to an existing system or informal process. These discourses are articulated with very little critical attention to the detail of making computational systems or the problems they create. Usually, as Golumbia (2017) explains, they draw on a metaphysics of information and computation that bears little relation to the material reality of eventual or existing computational systems. It is here, in particular, that the taken-for-grantedness of the improvements of computation is uncritically deployed, usually with little resistance. 
  2. Proto-computational: One-off prototypes are developed to create notional efficiencies, manage processes, or ease the reporting and aggregation of data. Often there is an associated discourse claiming that this creates “new ways of seeing” that enable patterns to be identified which were previously missed. These systems often do not meet the required needs, but these early failures, rather than being taken as putting the computational into question, serve to justify more computation, often implemented more radically, with greater change called for in order to make the computational work. 
  3. Micro-computational: A wider justification emerges for small-scale projects to implement computational microsystems. These are often complemented by the discursive rationalisation of informal processes, or justified by the greater insight these systems produce. This is where a decision has been taken to begin computational development, sometimes at a lightweight scale, but nonetheless the language of computation, both technically and as metaphor, starts to be deployed more earnestly as justification. 
  4. Meso-computational: Medium-scale systems are created which draw from or supplement the minimal computation already in process. This discourse is often manifest in multiple, sometimes co-existing and incompatible computations, differing ways of thinking about algorithms as a solution to problems, and multiple and competing data acquisition and storage practices. At this stage the computational is beyond question: it is taken a priori that a computational system is required, and where there are failures, more computation and more social change to facilitate it are demanded. 
  5. Macro-computational: Large-scale investment is made to manage what has become a complex informational and computational ecology. This discourse is often associated with attempts to create interoperability through mediating systems or the provision of new interfaces for legacy computational systems. At this stage, computation is seen as a source of innovation and disruption that rationalises social processes and helps manage and control individuals. These systems are taken to be goods in and of themselves, avoiding mistakes, bad behaviour, poor social outcomes and suchlike. The computational is now essentially metaphysical in its justificatory deployment, and the suggestion that computation might be making things worse is usually met with derision. 
  6. Infra-computational: Calls are made for the overhaul and/or replacement of major components of the systems, perhaps with a platform, and for the rationalisation of social practices through user interface design, hierarchical group controls over data, and centralised data stores. This discourse is often accompanied by large-scale data tracking, monitoring and control over individual work and practices. This is where the notion of a top-view, that is, the idea of management information systems (MIS), data analytics, large-scale Big Data pattern-matching and control through algorithmic intervention, is often reinforced. In this phase a system of data requires the free movement of data through the system via open definitions (e.g. open data, open access, open knowledge), which allow the standardisation and shareability of data entities, and therefore further processing and softwarization. This phase often serves as an imaginary and is therefore not necessarily ever completed, its failures serving as further justification for new infrastructures and new systems to replace earlier failed versions. 

This line of thinking draws on the work of David Golumbia, particularly the notion of Matryoshka dolls that he takes from the work of Philip Mirowski. This refers to multiple levels or shells of ideas that form a system of thinking, but which is itself not necessarily coherent as such, nor free of contradiction, particularly across the different layers of the shells. This is what “Mirowski calls the ‘Russian doll’ approach to the integration of research and praxis in the modern world” (Golumbia 2017: 5). Golumbia makes links between this way of thinking about neoliberalism, as a style of thinking that utilises this multi-layered aspect, and technolibertarianism, but here I want to think about computational approaches more broadly, that is, as instrumental-rational techniques of organisation. In other words, I want to point to the way in which computation is implemented, usually in a small-scale way, within an institutional context, and acts as an entry-point for further rationalisation and computation. This early opening creates the opportunity for more intensive computation, which is implemented in a bricolage fashion: at least initially there is no systematic attempt to replace an existing system, but over time, with the addition and accretion of computational partialities, calls grow for the overhaul of what is now a tangled and somewhat contradictory series of micro-computationalisms into a broader computational system or platform. Eventually this leads to a macro- or infra-computational environment which can be described as functioning as algorithmic governmentality, but which remains ever unfinished, with inconsistencies, bugs and irrationalities throughout the system (see Berns and Rouvroy 2013). The key point is that at all stages of computationally adapting an existing process there are multiple overlapping and sometimes contradictory processes in operation, even in large-scale computation.

Here I think that Golumbia’s discussion of the “sacred myths among the digerati” is very important, as it is this set of myths that goes unquestioned, especially early on in the development of a computational project, particularly at what I am calling the visionary-computational and proto-computational phases, but equally throughout the growth in computational penetration. Some of these myths include: claims of efficiency, the notion of cost savings, the idea of communications improvement, and the safeguarding of corporate or group memory. In other words, before a computerisation project is started, these justifications are already being mobilised in order to justify it, without any critical attention to where these a priori claims originate or their likely truth content.

This use of computation is not limited to standardised systems, of course, by which I mean instrumental-rational systems that are converted from a paper-based process into a software-based process. Indeed, computation is increasingly being deployed in a cultural and sociological capacity: for example, to manage individuals and their psychological and physical well-being, to manage or shape culture through interventions and monitoring, and to shape the capacity to work together, as teams and groups, and hence to shape particular kinds of subjectivity. Here there are questions more generally for automation and the creation of what we might call human-free technical systems, but also for the conditions of possibility for what Bernard Stiegler calls the Automatic Society (Stiegler 2016). It is also related to the introduction of digital and computational systems into areas not previously thought of as amenable to computation, for example in the humanities, as represented by the growth of the digital humanities (Berry 2012, Berry and Fagerjord 2017).

That is to say that “the world of the digital is everywhere structured by these fictionalist equivocations over the meanings of central terms, equivocations that derive an enormous part of their power from the appearance that they refer to technological and so material and so metaphysical reality” (Golumbia 2017: 34). Of course, the reality is that these claims are often unexamined and uncritically accepted, even when they are corrosive in their implementations. Where these computationalisms are disseminated and their creep goes beyond social and cultural norms, it is right that we ask: how much computation can a particular social group or institution stand, and what should be the response to it? (See Berry 2014: 193 for a discussion in relation to democracy.) We must certainly move beyond accepting that a partial success of computation implies that more computation is necessarily better. By critiquing computational creep, through the notion of the structure of the Russian doll in relation to computational processes of justification and implementation, together with the metaphysical a priori claims for the superiority of computational systems, we are better able to develop a means of containment, or algorithmic criticism. Through a critical theory that provides grounds for normative responses to the unchecked growth of computation across multiple aspects of our lives and society, we can look to the possibilities of computation without seeing it as inevitable or as deterministic of our social life (see Berry 2014).

Notes

[1] The title “Against the Computational Creep” is a reference to the very compelling book Against the Fascist Creep by Alexander Reid Ross. The intention is not to make an equivalence between fascism and computation; rather, I am interested in the concept of the “creep”, which Ross explains involves the small-scale, gradual use of particular techniques, the importation of ways of thinking, or the use of a form of entryism. In this article, of course, the notion of the computational creep therefore refers to the piecemeal use of computation, or the importation of computational practices and metaphors into a previously non-computational arena or sphere, and the resultant change in the ways of doing, ways of seeing and ways of being that this computational softwarization tends to produce. 

Bibliography

Berns, T. and Rouvroy, A. (2013) Gouvernementalité algorithmique et perspectives d’émancipation : le disparate comme condition d’individuation par la relation?, accessed 14/12/2016, https://works.bepress.com/antoinette_rouvroy/47/download/

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave Macmillan.

Berry, D. M. (2012) Understanding Digital Humanities, Basingstoke: Palgrave.

Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury

Berry, D. M. and Fagerjord, A. (2017) Digital Humanities: Knowledge and Critique in a Digital Age, Cambridge: Polity.

Golumbia, D. (2009) The Cultural Logic of Computation, Harvard University Press.

Golumbia, D. (2017) Mirowski as Critic of the Digital, boundary 2 symposium, “Neoliberalism, Its Ontology and Genealogy: The Work and Context of Philip Mirowski”, University of Pittsburgh, March 16-17, 2017

Ross, A. R. (2017) Against the Fascist Creep, Chico, CA: AK Press.

Stiegler, B. (2016) The Automatic Society, Cambridge: Polity.


Tactical Infrastructures

Infrastructures are currently the subject of much scholarly and activist critique (Hu 2015; Parks and Starosielski 2015; Plantin et al 2016; Starosielski 2015). Perhaps not so much in terms of their critically dissected effects and influences, as a form of ideology critique, but more in terms of a new recognition of their importance as conditions of possibility for forms of knowing and acting, together with the creation of epistemic stability and modes of knowledge that can be instrumentalised in particular ways (for a discussion, see Berry 2014).[1] In contrast, rather than describe existing infrastructure, I would like to think through the way in which counter-infrastructures can be thought about as tactical infrastructures. That is, how, through the creation of specific formations, temporary or otherwise, new modes of knowing and thinking, assembling and acting can be made possible by bringing scale technologies together. By tactical infrastructures I am, of course, gesturing towards the rich theoretical work on tactical media, which has been extremely important for media activism and theory (see Garcia and Lovink 1997; Raley 2009).[2] I also think it is useful to point towards the work of Liu (2016) and his recent conceptualisation of critical infrastructure studies. I am also drawing on the work of Feenberg, who has argued that a critical theory of technology requires “counter-acting the tendencies towards domination in the technological a priori” through the “materialization of values” (Feenberg 2013: 613). This, Feenberg argues, can be found at specific intervention points within the materialisation of this a priori, such as in design processes. Feenberg argues that “design is the mediation through which the potential for domination contained in scientific-technical rationality enters the social world as a civilisational project” (Feenberg 2013: 613).

Infrastructure is commonly understood as the basic physical and organizational structures and facilities (e.g. buildings, roads, power supplies) needed for the operation of a society or enterprise. It is also sometimes understood as the social and economic infrastructure of a country. Indeed, Parks argues, the word infrastructure “emerged in the early twentieth century as a collective term for the subordinate parts of an undertaking; substructure, foundation”, that is, as what “engineers refer to as ‘stuff you can kick’” (Parks 2015: 355). Infrastructure can be thought of as pre-socialised technologies, not in the sense that the material elements of infrastructure are non-social, but that although they themselves are sociotechnical materialities, they have reached what we might call their quasi-teleological condition. They are latent technologies that are made to be already ready for use, to be configured and reconfigured, and built into particular constellations that form the underlying structures for institutions. Heidegger would say that they are made to stand by. Infrastructure talk also gestures toward a kind of gigantism, the sheer massiveness of fundamental technologies and resources – their size usefully contrasting with the minuteness or ephemerality of the kinds of personal devices that are increasingly merely interfaces or gateways to underlying infrastructural systems.[3] 
Apple highlighting the M9 section of its A9 processor

Today we talk a lot about data infrastructures, the computational materiality for the highly digital sociality we live in, especially given the questions raised in the relations between the social and social media (see also Lovink 2012), but also in terms of the anxiety currently exhibited by a public that has begun to note the datafication of everyday life and the wider effects of a financialized economy. It is also notable that talk of infrastructure seems to allow us to get a grip on the ephemerality of data and computation: its seeming concreteness as a notion contrasts with that of clouds, streams, files and flows. So we hear about cables and wires, satellites and receivers, chips and boards, and the sheer thingness of these physical objects stands in symbolically for the difficulty of visualising computational objects. I use symbolically deliberately, because merely discursively asserting a materiality does not make it material. Indeed, most people have never seen an “actual” satellite or an undersea data cable, nor indeed a computer chip or circuit board. They rely on mediations provided by visual representations such as photography or video, which show the thingness of the cables or chips by photographing them. One is reminded of Apple’s turn towards a postdigital aesthetic of chip representation, gloriously shown in glossy marketing videos and component diagrams, displayed in keynote presentations that, whilst reciting chip speeds, transistor counts and cycles, dive and swoop over the visualised architecture of the device, selecting and showing black squares in light borders on the CPUs of their phones and computers (see Berry and Dieter 2015). This showing of chip materiality, seeing it in place within the device, translates the threatening opaqueness of computation into a design motif.

In terms of infrastructures we might consider the ways in which particular practices of Silicon Valley have become prevalent and tend to shape thinking across the fields affected by computation. For example, the recent turn towards what has come to be called “platformisation”, that is, the construction of a single digital system that acts as a technical monopoly within a particular sector (for a discussion, see Gillespie 2010; Plantin et al 2016). The obvious example here is Facebook in social media. Equally, in discussion over digital research infrastructures there is an understandable tendency towards centralisation and the development of unitary and standardised platforms for the digitalisation, archiving, researching and transformation of such data. Whilst most of these attempts have so far ended in failure, it remains the case that the desire and temptation to develop such a system is very strong, as it creates a transitional path towards the institutionalisation of infrastructures and the alignment of technologies towards an institutional goal or end. 

I am interested here in how infrastructures become institutions, and more particularly how tactical infrastructures can be positioned to change or replace institutions. As Tocqueville observed, “what we call necessary institutions are often no more than institutions to which we have grown accustomed.” This is to take forward Merton’s notion that only appropriate institutional change can break through problematic or tragic institutional effects (Merton 1948). I also want to move our attention beyond infrastructures and point their tactical use towards making institutions, in order to think about institutions as knowing-spaces, and how they force us to consider the political economic issues of making institutions, combined with a focus on creating specific epistemic communities within them. Here I am thinking of Fleck’s notion of a “thought collective” as a “nexus of knowledge which manifests itself in a social constraint upon thought” (Fleck 1979: 64). For example, Benkler (2006: 23) has called for a “core common infrastructure”, or a space of non-owned cultural production, making links between the particular values embedded in free-software infrastructures and the kinds of institutions and communities made possible. As he writes, particularly in relation to the internet, “if all network components are owned… then for any communication there must be a willing sender, a willing recipient, and a willing infrastructure owner. In a pure property regime, infrastructure owners have a say over whether, and the conditions under which, others in their society will communicate with each other. It is precisely the power to prevent others from communicating that makes infrastructure ownership a valuable enterprise” (Benkler 2006: 155).

We can think about how institutions generate alternate instantiations of space and time, which thus create the conditions of possibility for new forms of intentionality, thought and action. This also connects to the regulatory aspects of the forms of governance made possible in and through the structures of organization of an institution, and how through combining tactical infrastructures with activism they might be subverted or jammed. In Fleck’s terms this would be to think about the relation between the “thought style”, “thought collective” and the problem of infrastructures. He writes, the thought style “is characterized by common features in the problems of interest to a thought collective, by the judgment which the thought collective considers evident, and by the methods which it applies as a means of cognition” (Fleck 1979: 99). By connecting the affective and cognitive styles and performances made possible within an institution, structured by the particular constellations of infrastructures deployed, we might begin to create the grounds for intervention through the kinds of tactical infrastructure for institutional change that I am exploring here. 

By institution I am gesturing to specific organizations founded for a religious, educational, professional, or social purpose, such as a university or research lab. An institution is a material constellation of bodies, affects, histories, technologies, infrastructures and cultures which is organized. By organization I mean a specifically ordered, assembled, and structured group of people for a particular purpose, for example a business, a government department or a political organization.[4] Understanding the relationship of infrastructure to organization, and then to the form of the institution, is crucial to constructing progressive institutions and providing the possibility of contestation of institutional forms, not just their actions.[5] Hence, to turn to the question of infrastructure critique is also to turn towards ideology critique, and the subsequent possibility for unbuilding and, if necessary, creating counter-infrastructures or tactical infrastructures.[6] To do this it seems to me we have to avoid the dangers of a form of infrastructural fetishism that seeks to show the multiplicity of infrastructures through a project of aestheticisation of infrastructure, whether through photography, data visualisations, or any other media form. What is important is identifying how humans act within institutions and in doing so create and recreate fundamental elements of social interaction – i.e. how do thought-collectives and thought-styles adapt? – but also asking whether, if we change the fundamental structures of the infrastructures supporting institutions and their organization, we can strengthen the agencies of actors and the institution to work progressively. 
Notes
[1] There is a need for more ideology critique in relation to infrastructures, making use of the work of STS, software studies, sociology of technology, etc. With the ongoing critical turn in relation to algorithms, data, software and code we should hope to see more work done in infrastructure critique. 
[2] Garcia and Lovink write that “Tactical Media are what happens when the cheap ‘do it yourself’ media, made possible by the revolution in consumer electronics and expanded forms of distribution (from public access cable to the internet) are exploited by groups and individuals who feel aggrieved by or excluded from the wider culture. Tactical media do not just report events, as they are never impartial they always participate and it is this that more than anything separates them from mainstream media… above all [it is] mobility that most characterizes the tactical practitioner. The desire and capability to combine or jump from one media to another creating a continuous supply of mutants and hybrids. To cross borders, connecting and re-wiring a variety of disciplines and always taking full advantage of the free spaces in the media that are continually appearing because of the pace of technological change and regulatory uncertainty” (Garcia and Lovink 1997).
[3] There are normative questions here in regard to scale and methodology, particularly in relation to disciplinary biases towards certain scales and approaches. More so considering the way in which the digital creates multi-scalar potentials for research methods – it is interesting to consider the way in which scale still performs a “truth”-directing role nonetheless.
[4] There are strong connections here to Lovink and Rossiter’s (2013) notion of Orgnets. 
[5] This is to radicalise the notion of research infrastructures in the digital humanities, for example, where debates over the proper form of research infrastructures tend towards instrumental concerns over technical construction and deployment rather than normative or political issues. For example, many universities select their technical support infrastructures from large proprietary software companies, so in the case of email, Microsoft or IBM might be chosen to allow “integration” with their Office suite, but without considering the wider issues of data sharing, transatlantic movement of student data and work, data mining and so forth. Alan Liu is currently working very interestingly on some of these problematics under the notion of critical infrastructure studies, see Liu (2016). 
[6] This article has been inspired by much fruitful discussion with Michael Dieter, who I have been working with on the notion of critical infrastructures, particularly dark infrastructures, alter-infrastructures and vernacular infrastructures represented by Aaaaarg, Monoskop, Sci-Hub and related infrastructure projects. But we might also think about hacking “toolkits”, crypto parties, hack-labs, copy-parties, data activism and maker spaces as further examples of new structural environments for new forms of knowledge creation, dissemination and storage. Mapping the underlying infrastructures is an important task for thinking about how tactical infrastructures might be deployed. 

Bibliography
Benkler, Y. (2006) The Wealth of Networks, London: Yale University Press.

Bergson, H. (1998) Creative Evolution, New York: Dover Publications.
Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury
Berry, D. M. and Dieter, M. (2015) Postdigital Aesthetics: Art, Computation and Design, Basingstoke: Palgave. 
Feenberg, A. (2013) Marcuse’s Phenomenology: Reading Chapter Six of One-Dimensional Man, Constellations, Volume 20, Number 4, pp. 604-614.
Fleck, L. (1979) Genesis and Development of a Scientific Fact, London: The University of Chicago Press.

Garcia, D. and Lovink, G. (1997) The ABC of Tactical Media, Nettime, accessed 15/09/16, http://www.nettime.org/Lists-Archives/nettime-l-9705/msg00096.html

Gillespie T (2010) The politics of “platforms”, New Media & Society 12(3): 347–364.

Hu T.-H. (2015) A Prehistory of the Cloud. Cambridge, MA: The MIT Press.

Liu, A. (2016) Against the Cultural Singularity: Digital Humanities and Critical Infrastructure Studies, Youtube, accessed 15/09/16, https://www.youtube.com/watch?v=KHnJCc2Sc4Y

Lovink, G. (2012) What is Social in Social Media?, e-flux journal, #40, December 2012. 
Lovink, G. and Rossiter, N (2013) Organised Networks: Weak Ties to Strong Links, Occupy Times, accessed 04/04/2014, http://theoccupiedtimes.org/?p=12358
Merton, R. K. (1948) The Self-Fulfilling Prophecy, The Antioch Review, Vol. 8, No. 2 (Summer, 1948), pp. 193-210 
Parks, L. (2015) “Stuff you can kick”: Towards a theory of Media Infrastructures. In Between the humanities and the digital, (Eds, Svensson, P. & Goldberg, D.T.) MIT Press, Cambridge, Massachusetts, pp. 355-373.
Parks, L. and Starosielski, N. (2015) Signal Traffic: Critical Studies of Media Infrastructures, Illinois: University of Illinois Press.

Plantin, J. C., Lagoze, C.,  Edwards, P. N., and Sandvig, C. (2016) Infrastructure studies meet platform studies in the age of Google and Facebook, New Media & Society August 4, 2016, accessed 16/09/16, http://nms.sagepub.com/content/early/2016/08/02/1461444816661553.abstract

Raley, R. (2009) Tactical Media, Minneapolis: University of Minnesota Press. 

Starosielski N. (2015) The Undersea Network. Durham, NC: Duke University Press.

Signal Lab

As part of the Sussex Humanities Lab, at the University of Sussex, we are developing a research group clustered around information theoretic themes of signal/noise, signal transmission, sound theorisation, musicisation, simulation/emulation, materiality, game studies theoretic work, behavioural ideologies and interface criticism. The cluster is grouped under the label Signal Lab and we aim to explore the specific manifestations of the mode of existence of technical objects. This is explicitly a critical and political economic confrontation with computation and computational rationalities.

Signal Lab will focus on techno-epistemological questions around the assembly and re-assembly of past media objects, postdigital media and computational sites. This involves both attending to the impressions of the physical hardware (as a form of techne) and the logical and mathematical intelligence resulting from software (as a form of logos). Hence we aim to undertake an exploration of the technological conditions of the sayable and thinkable in culture, and of how the inversion of reason as rationality calls for the excavation of how techniques, technologies and computational media direct human and non-human utterances, without reducing techniques to mere apparatuses.

This involves the tracing of the contingent emergence of ideas and knowledge in systems in space and time, to understand distinctions between noise and speech, signal and absence, message and meaning. This includes an examination of the use of technical media to create the exclusion of noise as both a technical and political function and the relative importance of chaos and irregularity within the mathematization of chaos itself. It is also a questioning of the removal of the central position of human subjectivity and the development of a new machine-subject in information and data rich societies of control and their attendant political economies.

Within the context of information theoretic questions, we revisit the old chaos, and the return of the fear of, if not aesthetic captivation toward, a purported contemporary gaping meaninglessness, often associated with a style of nihilism, a lived cynicism and a jaded glamour of emptiness or misanthropy. This is particularly so in relation to a political aesthetic that desires the liquidation of the subject, which, in the terms of our theoretic approach, creates not only a regression of consciousness but also a regression to real barbarism. That is, data, signal, mathematical noise, information and computationalism conjure the return of fate and the complicity of myth with nature, and with them a concomitant total immaturity of society and a return to a society in which self-reflection can no longer open its eyes, and in which the subject not only does not exist but instead becomes understood as a cloud of data points, a dividual and an undifferentiated data stream.

Signal Lab will therefore pay attention to both the synchronic and diachronic dimensions of computational totality, taking the concrete meaningful whole and essential elements of computational life and culture. This involves the explanation of the emergence of present social forces in terms of past structures and general tendencies of social change. That is, within a given totality there is a process of growing conflict among opposite tendencies and forces which constitutes the internal dynamism of a given system, and which can be examined partly at the level of behaviour and partly at the level of subjective motivation. This is to examine the critical potentiality of signal in relation to the possibility of social forces and their practices and articulations within a given situation, and how they can play their part in contemporary history. This potentially opens the door to new social imaginaries and political possibilities for emancipatory politics in a digital age.

The Sussex Humanities Lab

The Sussex Humanities Lab is a new programme that will seek to position the University of Sussex at the forefront of theoretical and empirical work exploring the purported fundamental re-configuration of the humanities offered by computational technologies. As culture is re-born digital, old divisions that marked out criticism from history, music from painting, image from text, object from performance, have become increasingly problematic – legacies, perhaps, of the mediality of technologies like print and the structures inherited from the medieval university. When cultural production flows through the digital, the boundaries between different media and practices are reconfigured. This new formation calls for us to re-imagine the humanities, and to build fields of study that transcend the computational and the aesthetic, informed by new digital objects of study rather than by inherited disciplinary approaches. As such, this programme does not merely suggest a digital humanities newly transplanted into the existing disciplinary structures of the university, but rather another digital humanities: one which connects to the theoretical concerns of new media, media studies, critical theory, software studies, digital media, cultural studies and medium theory, whilst continuing to draw on and reconfigure the humanities within a digital milieu. This suggests a turn to critical digital humanities, and with it a set of concerns that engage with notions of materiality, medium-specificity, cultural critique, computation, networks, archives, performance, practices, and new computational cultures.

Directors: Caroline Bassett (PI), David M. Berry (Co-I), Sally Jane Norman (Co-I), Tim Hitchcock (Co-I), and Rachel Thomson (Co-I).

/// Launching Sept 2015 ///

Flat Theory

The world is flat.[1] Or perhaps better, the world is increasingly “layers”. Certainly the augmediated imaginaries of the major technology companies are now structured around a post-retina notion of mediation made possible and informed by the digital transformations ushered in by mobile technologies that provide a sense of place, as well as a sense of management of complex real-time streams of information and data.

Two new competing computational interface paradigms are now deployed in the latest versions of Apple’s and Google’s operating systems, but more notably as regulatory structures to guide design and strategy related to corporate policy. The first is “flat design”, introduced by Apple through iOS 8 and OS X Yosemite as a refresh of the ageing operating systems’ human computer interface guidelines, essentially stripping the operating system of historical baggage related to techniques of design that disguised the limitations of a previous generation of technology, in terms of both screen and processor capacity. It is important to note, however, that Apple avoids talking about “flat design” as its design methodology, preferring to talk through the specificity of its platforms, that is, about iOS’s design or OS X’s design. The second is “material design”, introduced by Google into its Android L, now Lollipop, operating system, which also sought to bring some sense of coherence to a multiplicity of Android devices, interfaces, OEMs and design strategies. More generally, “flat design” is “the term given to the style of design in which elements lose any type of stylistic characters that make them appear as though they lift off the page” (Turner 2014). As Apple argues, one should “reconsider visual indicators of physicality and realism” and think of the user interface as “play[ing] a supporting role”, that is, techniques of mediation through the user interface should aim to provide a new kind of computational realism that presents “content” as ontologically prior to, or separate from, its container in the interface (Apple 2014). This is in contrast to “rich design”, which has been described as “adding design ornaments such as bevels, reflections, drop shadows, and gradients” (Turner 2014).

I want to explore these two main paradigms – and to a lesser extent the flat-design methodology represented in Windows 7/8 and the, since renamed, Metro interface (now Microsoft Modern UI) – through the notion of a comprehensive attempt by both Apple and Google to produce a rich and diverse umwelt, or ecology, linked through what Apple calls “aesthetic integrity” (Apple 2014). This is both a response to their growing landscape of devices, platforms, systems, apps and policies, and an attempt to provide some sense of operational strategy in relation to computational imaginaries. Essentially, both approaches share an axiomatic approach to conceptualising the building of a system of thought, in other words a primitivist predisposition, which draws from a neo-Euclidian model of geons (for Apple) and from a notion of intrinsic value or neo-materialist formulations of essential characteristics (for Google). That is, they each encapsulate a version of what I am calling here flat theory. Both of these companies are trying to deal with the problematic of multiplicities in computation, and the requirement that multiple data streams, notifications and practices have to be combined and managed within the limited geography of the screen. In other words, both approaches attempt to create what we might call aggregate interfaces by combining techniques of layout, montage and collage onto computational surfaces (Berry 2014: 70).

The “flat turn” has not happened in a vacuum, however, and is the result of a new generation of computational hardware, smart silicon design and retina screen technologies. This was driven in large part by the mobile device revolution, which has transformed not only the taken-for-granted assumptions of historical computer interface design paradigms (e.g. WIMP) but also the subject position of the user, particularly as structured through the Xerox/Apple notion of single-click functional design of the interface. Indeed, one of the striking features of the new paradigm of flat design is that it is a design philosophy about multiplicity and multi-event. The flat turn is therefore about modulation, not about enclosure as such; indeed, it is a truly processual form that constantly shifts and changes, and in many ways acts as a signpost for the future interfaces of real-time algorithmic and adaptive surfaces and experiences. The structure of control for flat design interfaces follows that of the control society: “short-term and [with] rapid rates of turnover, but also continuous and without limit” (Deleuze 1992). To paraphrase Deleuze: Humans are no longer in enclosures, certainly, but everywhere humans are in layers.

Apple uses a series of concepts to link its notion of flat design, which include aesthetic integrity, consistency, direct manipulation, feedback, metaphors, and user control (Apple 2014). The haptic experience of this new flat user interface has been described as building on the experience of “touching glass” to develop the “first post-Retina (Display) UI (user interface)” (Cava 2013). This is the notion of layered transparency, or better, layers of glass upon which the interface elements are painted through a logical internal structure of Z-axis layers. This laminate structure enables meaning to be conveyed through the organisation of the Z-axis, both in terms of content, but also to place it within a process or the user interface system itself.
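To make this laminate structure more concrete, here is a minimal sketch, in TypeScript, of how a Z-ordered stack of translucent “glass” layers might be modelled. This is emphatically not Apple’s actual API; every name and value here is hypothetical, offered only to illustrate the compositional logic of the laminate.

```typescript
// Hypothetical model (not Apple's API) of the "layers of glass" reading
// of flat design: meaning is carried by position along the Z-axis.

interface GlassLayer {
  name: string;        // e.g. "wallpaper", "app content", "notification sheet"
  z: number;           // position in the laminate; higher sits nearer the viewer
  opacity: number;     // 0 = fully transparent, 1 = fully opaque
  blursBelow: boolean; // translucency: lower layers show through, blurred
}

// Compositing is simply a sort along the Z-axis: the resulting order is
// what tells the user whether an element is "content" or system chrome.
function composite(layers: GlassLayer[]): string[] {
  return [...layers]
    .sort((a, b) => a.z - b.z)
    .map(l => `${l.name} (z=${l.z}, opacity=${l.opacity}` +
              `${l.blursBelow ? ", blurs layers beneath" : ""})`);
}

const screen: GlassLayer[] = [
  { name: "wallpaper", z: 0, opacity: 1.0, blursBelow: false },
  { name: "app content", z: 1, opacity: 1.0, blursBelow: false },
  { name: "notification sheet", z: 2, opacity: 0.8, blursBelow: true },
];

console.log(composite(screen).join("\n"));
```

The point of the sketch is simply that nothing here imitates physical depth: the hierarchy is purely logical, conveyed by ordering and translucency rather than by skeuomorphic ornament.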

Google, similarly, has reorganised its computational imaginary around a flattened, layered paradigm of representation through the notion of material design. Matias Duarte, Google’s Vice President of Design and a Chilean computer interface designer, declared that this approach uses the notion that it “is a sufficiently advanced form of paper as to be indistinguishable from magic” (Bohn 2014). But this is magic with constraints and affordances built into it: “if there were no constraints, it’s not design — it’s art”, Google claims (see Interactive Material Design) (Bohn 2014). Indeed, Google argues that the “material metaphor is the unifying theory of a rationalized space and a system of motion”, further arguing:

The fundamentals of light, surface, and movement are key to conveying how objects move, interact, and exist in space and in relation to each other. Realistic lighting shows seams, divides space, and indicates moving parts… Motion respects and reinforces the user as the prime mover… [and together] They create hierarchy, meaning, and focus (Google 2014). 

This notion of materiality is a weird materiality in as much as Google “steadfastly refuse to name the new fictional material, a decision that simultaneously gives them more flexibility and adds a level of metaphysical mysticism to the substance. That’s also important because while this material follows some physical rules, it doesn’t create the “trap” of skeuomorphism. The material isn’t a one-to-one imitation of physical paper, but instead it’s ‘magical'” (Bohn 2014). Google emphasises this connection, arguing that “in material design, every pixel drawn by an application resides on a sheet of paper. Paper has a flat background color and can be sized to serve a variety of purposes. A typical layout is composed of multiple sheets of paper” (Google Layout, 2014). The stress on material affordances, paper for Google and glass for Apple are crucial to understanding their respective stances in relation to flat design philosophy.[2]

Glass (Apple): Translucency, transparency, opaqueness, limpidity and pellucidity. 

Paper (Google): Opaque, cards, slides, surfaces, tangibility, texture, lighted, casting shadows. 

Paradigmatic Substances for Materiality

In contrast to the layers of glass that inform the logics of transparency, opaqueness and translucency of Apple’s flat design, Google uses the notion of remediated “paper” as a digital material: this “material environment is a 3D space, which means all objects have x, y, and z dimensions. The z-axis is perpendicularly aligned to the plane of the display, with the positive z-axis extending towards the viewer. Every sheet of material occupies a single position along the z-axis and has a standard 1dp thickness” (Google 2014). One might think, then, of Apple as painting on layers of glass, and Google as thin paper objects (material) placed upon background paper. A key difference, however, lies in Google’s use of light and shadow, which enables the light source, located in a similar position to the user of the interface, to cast shadows of the material objects onto the objects and sheets of paper that lie beneath them (see Jitkoff 2014). Nonetheless, a laminate structure is key to the representational grammar that constitutes both of these platforms.
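By way of contrast with the glass sketch above, the following short sketch models the “paper” grammar as Google describes it in the passage just quoted: every sheet occupies a single z position, has a standard 1dp thickness, and casts a shadow on whatever lies beneath it. Again, this is a hypothetical illustration rather than Google’s API, and the shadow rule in particular is a toy simplification.

```typescript
// Hypothetical model (not Google's API) of material design "paper":
// sheets of standard 1dp thickness positioned along the Z-axis.

interface Sheet {
  name: string;
  z: number;      // elevation in dp above the background paper
  thicknessDp: 1; // every sheet of material is 1dp thick
}

// With a light source located near the viewer, a higher sheet casts a
// larger, softer shadow on the sheets below it (toy rule for illustration).
function shadowRadius(sheet: Sheet): number {
  return sheet.z * 2;
}

const layout: Sheet[] = [
  { name: "background paper", z: 0, thicknessDp: 1 },
  { name: "card", z: 2, thicknessDp: 1 },
  { name: "floating action button", z: 6, thicknessDp: 1 },
];

for (const s of layout) {
  console.log(`${s.name}: elevation ${s.z}dp, shadow radius ${shadowRadius(s)}px`);
}
```

Where the glass sketch carried meaning through ordering and translucency, here it is carried through elevation and shadow: the two laminate grammars differ in their founding substance but share the Z-axis as their organising principle.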

Armin Hofmann, head of the graphic design department at the Schule für Gestaltung Basel (Basel School of Design), was instrumental in developing the graphic design style known as the Swiss Style. Designs from 1958 and 1959. 

Interestingly, both design strategies emerge from an engagement with and reconfiguration of the principles of design that draw from the Swiss style (sometimes called the International Typographic Style) in design (Ashghar 2014, Turner 2014).[3] This approach emerged in the 1940s, and

mainly focused on the use of grids, sans-serif typography, and clean hierarchy of content and layout. During the 40’s and 50’s, Swiss design often included a combination of a very large photograph with simple and minimal typography (Turner 2014).

The design grammar of the Swiss style has been combined with minimalism and the principle of “responsive design”, that is, that the materiality and specificity of the device should be responsive to the interface and context being displayed. Minimalism is a “term used in the 20th century, in particular from the 1960s, to describe a style characterized by an impersonal austerity, plain geometric configurations and industrially processed materials” (MoMA 2014). Robert Morris, one of the principal artists of Minimalism and author of the influential Notes on Sculpture, used “simple, regular and irregular polyhedrons. Influenced by theories in psychology and phenomenology”, which he argued “established in the mind of the beholder ‘strong gestalt sensation’, whereby form and shape could be grasped intuitively” (MoMA 2014).[4]

Robert Morris: Untitled (Scatter Piece), 1968-69, felt, steel, lead, zinc, copper, aluminum, brass, dimensions variable; at Leo Castelli Gallery, New York. Photo Genevieve Hanson. All works this article © 2010 Robert Morris/Artists Rights Society (ARS), New York.

The implications of these two competing world-views are far-reaching, in that much of the world’s initial contact, or touch points, for data services, real-time streams and computational power is increasingly through the platforms controlled by these two companies. However, they are also deeply influential across the programming industries, and we see alternatives and multiple reconfigurations in relation to the challenge raised by the “flattened” design paradigms. That is, they both represent, if only in potentia, a power relation and, through this, an ideological veneer on computation more generally. Further, with the proliferation of computational devices – and the screenic imaginary associated with them in the contemporary computational condition – there appears a new logic which lies behind, justifies and legitimates these design methodologies.

It seems to me that these new flat design philosophies, in the broad sense, produce an ordering of precepts and concepts that gives meaning and purpose not only to interactions with computational platforms, but also more widely to everyday life. Flat design and material design are competing philosophies that offer alternative patterns of both creation and interpretation, with implications not only for interface design, but more broadly for the ordering of concepts and ideas, and for the practices and the experience of computational technologies broadly conceived. Another way to put this could be to think about these moves as a computational founding: the generation of, or argument for, an axial framework for building, reconfiguration and preservation.

Indeed, flat design provides, and more importantly serves as, a translational or metaphorical heuristic for re-presenting the computational, but it also teaches consumers and users how to use and manipulate new complex computational systems and stacks. In other words, in a striking visual technique, flat design communicates the vertical structure of the computational stack on which the Stack corporations are themselves constituted. It also begins to move beyond the specificity of the device as the privileged site of a computational interface interaction from beginning to end. Interface techniques are abstracted away from the specificity of the device, for example through Apple’s “Handoff” continuity framework, which also potentially changes reading and writing practices in interesting ways.

These new interface paradigms, introduced by the flat turn, open very interesting possibilities for the application of interface criticism, through unpacking and exploring the major trends and practices of the Stacks, that is, the major technology companies. Further than this, I think that the notion of layers is instrumental in mediating the experience of an increasingly algorithmic society (e.g. think of dashboards, personal information systems, the quantified self, etc.), and as such provides an interpretative frame for a world of computational patterns, but also a constituting grammar for building these systems in the first place. There is an element in which the notion of the postdigital may also be a useful way into thinking about the question of the link between art, computation and design given here (see Berry and Dieter, forthcoming), but also the importance of notions of materiality for the conceptualisations deployed by designers working within both the flat design and material design paradigms – whether of paper, glass, or some other “material” substance.[5]

Notes

[1] Many thanks to Michael Dieter and Søren Pold for the discussion which inspired this post. 
[2] The choice of paper and glass as the founding metaphors for the flat design philosophies of Google and Apple raise interesting questions for the way in which these companies articulate the remediation of other media forms, such as books, magazines, newspapers, music, television and film, etc. Indeed, the very idea of “publication” and the material carrier for the notion of publication is informed by the materiality, even if only a notional affordance given by this conceptualisation. It would be interesting to see how the book is remediated through each of the design philosophies that inform both companies, for example. 
[3] One is struck by the posters produced in the Swiss style which date to the 1950s and 60s but which today remind one of the mobile device screens of the 21st Century. 
[4] There are also some interesting links to be explored between the Superflat style and postmodern art movement, founded by the artist Takashi Murakami, which is influenced by manga and anime, both in terms of the aesthetic but also in relation to the cultural moment in which “flatness” is linked to “shallow emptiness”.
[5] There is some interesting work to be done in thinking about the non-visual aspects of flat theory, such as the increasing use of APIs (e.g. RESTful APIs), but also sound interfaces that use “flat” sound to indicate spatiality in terms of interface or interaction design.  

Bibliography

Apple (2014) iOS Human Interface Guidelines, accessed 13/11/2014, https://developer.apple.com/library/ios/documentation/userexperience/conceptual/mobilehig/Navigation.html

Ashghar, T. (2014) The True History Of Flat Design, accessed 13/11/2014, http://www.webdesignai.com/flat-design-history/

Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury.

Berry, D. M. and Dieter, M. (forthcoming) Postdigital Aesthetics: Art, Computation and Design, Basingstoke: Palgrave Macmillan.

Bohn, D. (2014) Material world: how Google discovered what software is made of, The Verge, accessed 13/11/2014, http://www.theverge.com/2014/6/27/5849272/material-world-how-google-discovered-what-software-is-made-of

Cava, M. D. (2013) Jony Ive: The man behind Apple’s magic curtain, USA Today, accessed 1/1/2014, http://www.usatoday.com/story/tech/2013/09/19/apple-jony-ive-craig-federighi/2834575/

Deleuze, G. (1992) Postscript on the Societies of Control, October, vol. 59: 3-7.

Google (2014) Material Design, accessed 13/11/2014, http://www.google.com/design/spec/material-design/introduction.html

Google Layout (2014) Principles, Google, accessed 13/11/2014, http://www.google.com/design/spec/layout/principles.html

Jitkoff, N. (2014) This is Material Design, Google Developers Blog, accessed 13/11/2014,  http://googledevelopers.blogspot.de/2014/06/this-is-material-design.html

MoMA (2014) Minimalism, MoMA, accessed 13/11/2014, http://www.moma.org/collection/details.php?theme_id=10459

Turner, A. L. (2014) The history of flat design: How efficiency and minimalism turned the digital world flat, The Next Web, accessed 13/11/2014, http://thenextweb.com/dd/2014/03/19/history-flat-design-efficiency-minimalism-made-digital-world-flat/

Interview with David M. Berry at re:publica 2013

Open science interview at re:publica conference in Berlin, 2013, by Kaja Scheliga.

Kaja Scheliga: So to start off… what is your field, what do you do?


David M. Berry: My field is broadly conceived as digital humanities or software studies. I focus in particular on critical approaches to understanding technology, through theoretical and philosophical work, so, for example, I have written a book called Philosophy of Software and I have a new book called Critical Theory and The Digital but I am also interested in the multiplicity of practices within computational culture as well, and the way the digital plays out in a political economic context.

KS: Today, here at the re:publica you talked about digital humanities. What do you associate with the term open science?

DB: Well, open science has very large resonances with Karl Popper’s notion of the open society, and I think the notion of open itself is interesting in that kind of construction, because it implies a “good”. To talk about open science implies firstly that closed science is “bad”, that science should be somehow widely available, that everything is published and there is essentially a public involvement in science. It has a lot of resonances, not necessarily clear. It is a cloudy concept. 

KS: So where do you see the boundary between open science and digital humanities? Do they overlap or are they two separate fields? Is one part of the other?


DB: Yes, I think, as I was talking in the previous talk about how digital humanities should be understood within a constellation, I think open science should also be understood in that way. There is no single concept as such, and we can bring up a lot of different definitions, and practitioners would use it in multiple ways depending on their fields. But I think, there is a kind of commitment towards open access, the notion of some kind of responsibility to a public, the idea that you can have access to data and to methodologies, and that it is published in a format that other people have access to, and also there is a certain democratic value that is implicit in all of these constructions of the open: open society, open access, open science, etc. And that is really linked to a notion of a kind of liberalism that the public has a right, and indeed has a need to understand.  And to understand in order to be the kind of citizen that can make decisions themselves about science. So in many ways it is a legitimate discourse, it is a linked and legitimating discourse about science itself, and it is a way of presenting science as having a value to society.

KS:  But is that justified, do you agree with this concept? Or do you rather look at it critically?

DB: Well, I am a critical theorist. So, for me these kinds of concepts are never finished. They always have within them embedded certain kinds of values and certain kinds of positions. And so for me it is an interesting concept and I think “open science” is interesting in that it emerges at a certain historical juncture, and of course with the notion of a “digital age” and all the things that have been talked about here at the re:publica, everyone is so happy and so progressive and the future looks so bright – apparently…

KS: Does it?

DB: Yes, well, from the conference perspective, because re:publica is a technology conference, there is this whole discourse of progress – which is kind of an American techno-utopian vision that is really odd in a European context – for me anyway. So, being a critical theorist, it does not necessarily mean that I want to dismiss the concept, but I think it is interesting to unpick the concept and see how it plays out in various ways. In some ways it can be very good, it can be very productive, it can be very democratic, in other ways it can be used for example as a certain legitimating tool to get funding for certain kinds of projects, which means other projects, which are labelled “closed”, are no longer able to get funded. So, it is a complex concept, it is not necessarily “good” or “bad”.

KS: So, not saying ‘good’ or ‘bad’, but looking at the dark side of, say, openness – where do you see the limits? Or where do you see problem zones?

DB: Well, again, to talk about the “dark side” is kind of like Star Wars or something. We have to be very careful with that framework, because the moment you start talking about the dark side of the digital – which is a current, big discussion going on, for example, around the dark side of the digital humanities – I think it is a bit problematic. That is why thinking in terms of critique is a much better way to move forward. So for me, what would be more interesting would be to look at the actual practices of how open science is used and deployed. Which practitioners are using it? Which groups align themselves with it? Which policy documents? And which government policies are justified by appealing to open science itself? And then, it is important to perform a kind of genealogy of the concept of “open science” itself. Where does it come from? What is it borrowing from? Where is the discussion over that term? Why did we come to this term being utilised in this way? I think that then shows us the force of a particular term, and places it within an historical context. Because open science ten years ago may have meant one thing, but open science today might mean something different. So, it is very important that we ask these questions.

KS: All right. And are there any open science projects that come to mind, spontaneously, right now?


DB: I’m not sure they would brand themselves as “open science”, but I think CERN would be, for me, a massive open science project, and one which likes to promote itself in these kinds of ways: the idea of a public good, publishing their data, having a lot of cool things on their website the public can look at. But ultimately, that justification of open science is disconnected because, well, what is the point of finding the Higgs boson, what is the actual point, where will it go, what will it do? And that question never gets asked because it is open science, so the good of open science makes it hard for us to ask these other kinds of questions. So, those are the kinds of issues that I think are really important. It is also interesting that, for example, there was an American version of CERN which was cancelled. So why was CERN built, and how did open science enable that? I mean, we are talking huge amounts of money and large amounts of effort. Would this money have been better spent on solving the problem of unemployment? We are in a fiscal crisis at the moment, a financial catastrophe, and these kinds of questions get lost because open science itself gets divorced from its political economic context.

KS: Yes. But it is interesting that you say that within open science certain questions are maybe not that welcome – so actually, it seems to be, in certain places, still pretty closed, right?

DB: Well, that is right, open itself is a way of closing down other kinds of debates. So, for example, in the programming world open source was promoted in order not to have a discussion about free software, because free software was just too politicised for many people. Open was a nice woolly term that meant everything to a lot of different people, did not feel political, and therefore could be promoted to certain actors – many governments, but also corporations. And people sign up to open source because it just sounds right – “open source, yes, who is not for open source?” I think if you were to ask anyone here you would struggle to find anybody against open source. But if you asked them whether they are for free software, a lot of people would not know what it is. That concept has been pushed away. I think the same thing happens in science through these kinds of legitimating discourses: certain kinds of critical approaches get closed down. I think you would not be welcome if, at the CERN press conference for the Higgs boson, you put up your hand and asked: “well actually, would it not have been better spending this money on solving poverty?” That would immediately not be welcomed as a legitimate line of questioning.

KS: Yes, right. Okay, so do you think science is already open, or do we need more openness? And if so, where?

DB: Well, again, that is a strange question that assumes that I know what “open” is. I mean openness is a concept that changes over time. I think that the project of science clearly benefits from its ability to be critiqued and checked, and I do not necessarily just want to have a Popperian notion of science here – it is not just about falsification – but I think verification and the ability to check numbers is hugely important to the progress of science. So that dimension is a traditional value of science, and very important that it does not get lost. Whether or not rebranding it as open science helps us is not so straightforward. I am not sure that this concept does much for us, really. Surely it is just science? And approaches that are defined as “closed” are perhaps being defined as non-science.

KS: What has the internet changed about science and working in research?

DB: Well, I am not a scientist, so –   

KS: – as in science, as in academia. Or, what has the internet changed in research?

DB: Well, this is an interesting question. Without being too philosophical about it, I hope: Heidegger talked about the fact that science was not science anymore, that technology had massively altered what science was. Because science now is about using mechanisms, tools, digital devices, and computers in order to undertake the kinds of science that are possible. So it becomes this entirely technologically driven activity. Also, today science has become much more firmly located within economic discourse, so science needs to be justified in terms of economic output, for example. It is not just the internet and the digital that have introduced this; there are larger structural conditions that I think are part of this. So, what has the internet or the web changed about science? One thing is allowing certain kinds of scientism to be performed in public. And so you see this playing out in particular ways: certain movements – really strange movements – have emerged that are pro-science and they just seek to attack people they see as anti-science. So, for example, the polemical atheist movement led by Richard Dawkins argues that it is pro-science and that anyone who is against it is literally against science – they are anti-science. This is a very strange way of conceptualising science. And some scientists, I think, are very uncomfortable with the way Dawkins is using rhetoric, not science, to enforce and justify his arguments. Another example is the “skeptics” movement, another very “pro-science” movement that has very fixed ideas about what science is. So science becomes a very strong, almost political philosophy, a scientism. I am interested in exploring how digital technologies facilitate a technocratic way of thinking: a certain kind of instrumental rationality, as it were.

KS: How open is your research, how open is your work? Do you share your work in progress with your colleagues?

DB: Well, as an academic, sharing knowledge is a natural way of working – we are very collaborative, go to conferences, present new work all the time, and publish in a variety of different venues. In any case, your ability to be promoted as an academic, to become a professor, is based on publishing, which means putting work out there in the public sphere which is then assessed by your colleagues. So the very principles of academia are about publishing, peer review, and so on and so forth. So, we just have to be a bit careful about the framing of the question in terms of: “how ‘open’ is your work?”, because I am not sure how useful that question is inasmuch as it is too embedded within certain kinds of rhetorics that I am a little bit uncomfortable with. So the academic pursuit is very much about sharing knowledge – but also knowledge being shared.

KS: Okay. I was referring to, of course – when you do work and when you have completed your research, you want to share it with others, because that is the point of doing the research in the first place: to find something out and then to tell the world, look, this is what I found out, right?

DB: Possibly. No.

KS: No?

DB: This is what I am saying. I mean –

KS: I mean, of course in a simplified way.

DB: Well, disciplines are not there to “tell the world”. Disciplines are there to do research and to create research cultures. What is the point of telling the world? The world is not necessarily very interested. And so you have multiple publics – which is one way of thinking about it. So one of my publics, if you like, is my discipline, and cognate disciplines, and then broader publics like re:publica, and then maybe the general public. And there are different ways of engaging with those different audiences. If I were a theoretical physicist, for example, publishing complex mathematical formulae, I could put that on the web but you are not really going to get an engagement from a public as such. That will need to be translated. And therefore maybe you might write a newspaper article which translates that research for a different public. So, I think it is not about just throwing stuff on the web or what have you. I think that would be overly simplistic. It is also about translation. So do I translate my research? Well, I am doing it now. I do it all the time. So, I talk to Ph.D. students and graduates; that is part of the dissemination of information, which is, I think, really what you are getting at. How do you disseminate knowledge?

KS: Exactly. And knowledge referring not only to knowledge that is kind of settled and finished, you know, I have come to this conclusion, this is what I am sharing, but also knowledge that is in the making, in the process, that was what I was referring to.

DB: Sure, yes. I mean, good academics do this all the time. And I am talking particularly about academia here. I think good academics do research and then they are teaching and of course these two things overlap in very interesting ways. So if you are very lucky to have a good scholar as a professor you are going to benefit from seeing knowledge in the making. So that is a more general question about academic knowledge and education. But the question of knowledges for publics, I think that is a different question and it is very, very complex and you need to pin down what it is you want to happen there. In Britain we have this notion of the public engagement of science and that is about translation. Let’s say you do a big research project that is very esoteric or difficult to understand, and then you write a popular version of it – Stephen Hawking is a good example of this – he writes books that people can read and this has major effects beyond science and academia itself. I think this is hugely important, both in terms of understanding how science is translated, but also how popular versions of science may not themselves be science per se.

KS: So, what online tools do you use for your research?

DB: What online tools? I do not use many online tools as such. I am in many ways quite a traditional scholar: I rely on books – I will just show you my notes. I take notes in a paper journal and I write with a fountain pen, which I think is a very traditional way of working. The point is that my “tools” are non-digital. I hardly ever digitise my notes, and I think it is interesting to go through the medium of paper to think about the digital, because digital tools seem to offer us solutions and we are very caught up in the idea that the digital provides answers. I think we have to pause a little bit, and paper forces you to slow down – that is why I like it. It is this slowing down that I think is really important when undertaking research, giving time to think by virtue of making knowledge embodied. Obviously, when it comes to collecting data and following debates I will use digital tools. Google, of course, is one of the most important; Google Scholar and social media are really interesting tools; Gephi is a very interesting social network analysis tool. I use Word and Excel, as does pretty much everybody else. So the important issue is choosing which digital tools to use in which contexts. One thing I do much less of is, for example, the kind of programming where people write APIs and scrapers and these kinds of approaches. I have been involved in some projects doing that, but I just do not have time to construct those tools, so I sometimes use other people’s software (such as digital methods tools).

Notes, reproduced in Lewandowska and Ptak (2013)


KS: Okay, and how about organising ideas, do you do that on paper? Or for example do you use a tool for task managing?

DB: Always paper. If you have a look in my journal you can see that I can choose any page and there is an organisation of ideas going on here. For me it is a richer way to work through ideas and concepts. Eventually, you do have to move to another medium – you know, I do not type my books on typewriters! – I use a word processor, for example. So eventually I do work on a computer, but by that point I think the structure is pretty much in my head, mediated through paper and ink – the computer is therefore an inscription device at the end of thinking. I dwell on paper, as it were, and then move over into a digital medium. You know, I do not use any concept mapping software; I just find it too clumsy and too annoying actually.

KS: Okay, so what puts you off using, or being tempted by, all those tools that offer you help and offer to make you more productive?

DB: Well, because firstly, I do not want to be more productive, and secondly, I do not think they help. So the first thing I tell my new students, including new Ph.D. students, is: buy a notebook and a pen and start taking notes. Do not think that the computer is your tool or your servant. The computer will be your hindrance, particularly in the early stages of a Ph.D. It is much more important to carefully review and think through things. And that is actually the hardest thing to do, especially in this world of tweets and messages and emails – distractions are everywhere. There are no tweets in my book, thankfully, and it is the slowness and leisureliness that enables me to create a space for thinking. It is a good way of training your mind to pause and think before responding.

KS: So, you are saying that online tools kind of distract us from thinking and actually we think that we are doing a lot of stuff but actually we are not doing that much, right?

DB: Well, the classic problem is students who, for example, think they are doing an entirely new research project and map it all out in a digital tool that allows you to do fancy graphs, etc. – but they are not asking any kind of interesting research questions because they have not actually looked at the literature and they do not know the history of their subject. So it is very important that we do this; indeed, some theorists have made the argument that we are forgetting our histories. And I think this is very true. The temptation to be in the future, to catch the latest wave or the latest trend, affects Ph.D. students and academics as much as everybody else. And there are great dangers in chasing those kinds of solutions. Academia used to be about taking your time, being slow, and considering things. And I think in the digital age academia’s value is that it can continue to do that – at least I hope so.

KS: Okay, but is there not a danger, if you say: okay, I am taking my time, I am taking my paper and my pen, while others are hacking away, busy using all those online tools – and in a way you could say that speeds up some part of research, at least when you draw out the cumulative essence of it – can you afford to invest the time?

DB: Well, it is not either/or. It is both. The trouble is, I find anyway, with Ph.D. students, that their rush to use digital tools is a way of avoiding the paper. A classic example of this is Endnote. Everybody rushes to use Endnote because they do not like doing bibliographies. But actually, doing the bibliography by hand is one of the best things you can do, because you learn your field’s knowledge, and you immediately recognise names because you are the one typing them in. Again this is a question of embodiment. When you leave that to a computer program to do for you, laziness emerges – and you just pick and choose names to scatter over your paper. So, I am not saying you should not use such tools; I am saying that you should maybe do both. I mean, I never use these tools to construct bibliographies. I do them by hand because it encourages me to think through: what is this person really contributing, what do they add? And I think that is really important.

KS: Although it probably should be more about: okay, what do I remember of this person’s writing, and what have they contributed – and not so much about whose name sounds fancy and which names I need to drop here.

DB: Totally. Well, there has been some interesting work on this. Researchers have undertaken bibliometric analysis to show how references are used in certain disciplines and how common citations crop up again and again because they were used in previous papers and researchers feel the need to mention them again – so it becomes a name-checking exercise. Interestingly, few people go back and read these original canonical papers. So it is really important to read early work in a field, and place it within an historical context and trajectory, if one is to make sense of the present.

KS: A last question, I want to ask you about collaborative writing, do you write with other people and if so, how does that work? Where do you see advantages and where do you see possible trouble?

DB: Yes, I do. I have been through the whole gamut of collaborative writing, so I have seen both the failures and the successes. Collaborative writing is never easy, first and foremost – particularly, I think, for humanities academics, because we are taught, and we are promoted, on the basis of our name being on the front of a paper or on the cover of a book. This obviously adds its own complications; plus, you know, academics tend to be very individualistic, and there are always questions about –

KS: …in spite of all the collaboration, right?


DB: Indeed, yes, of course, I mean that is just the academic way, but I think you need that, because writing a book requires you to sit in a room for months and months and months while the sun is shining and everyone else is having fun, and you are sitting there in a gloomy room typing away. So you need that kind of self-drive and belief, and that, of course, causes frictions between people. So I have tried various different methods of working with people, but one method I found particularly interesting is called booksprinting. It is essentially a time-boxed process where you come together with, let us say, four or five other scholars; you are locked in a room for the week (figuratively speaking!), except to sleep, and you eat together, write together, concept map and develop a book, collaboratively. And then the book that is produced is jointly authored – there are no arguments over that; if you do not agree you can leave – but the point is that the collaborative output is understood and bought into by all the participants. Now, to many academics this sounds like absolute horror, and indeed when I was first asked if I would like to be involved I was sceptical – I went along but I was sure this was going to be a complete failure. However, it was one of the most interesting collaborative writing processes I have been involved in. I have taken part in two book sprints to date (three including 2014). You are welcome to have a look at the first book, it is called New Aesthetic New Anxieties. It is amazing how productive those kinds of collaborative writing processes can be. But it has to be a managed process. So, do check out booksprinting, it is very interesting – see also Imaginary museums, Computationality & the New Aesthetic and On Book Sprints.

KS: Okay, but then for that to work what do you actually / from your experience, can you draw out factors that make it work?

DB: Sure. The most important factor is having a facilitator – someone who does not write. The facilitator’s role is to make sure that everybody else does write. And that is an amazing ability; they are a key person, because they have to manage difficult people and situations – it is like herding cats. Academics do not like to be pushed, for example. And the facilitator I have worked with is very skilled at this kind of facilitation. The second thing is the kinds of writing that you do and how you do it. The booksprinting process I have been involved in has been very paper-based, so again there is a lot of paper everywhere, there are post-it notes, there is a lot of sharing of knowledge. And this is probably the bit you are going to find interesting: there is, nonetheless, a digital tool which enables you to write collaboratively. It is a cleverly written tool; it has none of the bells and whistles, it is very utilitarian, and it really focuses the writing process and working together. And, having seen it used on two different booksprints, I can affirm that it does indeed help the writing process. I recommend you have a look.

KS: So, what is the tool?

DB: It is called Booktype. And Adam Hyde is the facilitator who developed the process of Book Sprints, and is also one of the developers of the software.

KS: Okay, interesting. Any questions? Or any question I did not ask you, anything you want to add that we have missed out, any final thoughts? Any questions for me?

DB: Yes, I do think that a genealogy of “open science” is important, and your questions are really interesting because they are informed by certain assumptions about what open science is. In other words, there is a certain position you are taking which you do not make explicit, and which I find interesting. So it might be useful to reflect on how “open science” needs to be critically unpacked further.

KS: Okay, great, thank you very much.

DB: My pleasure.

KS: Thanks.

DB: Thank you.

Interview archived at Zenodo. Transcript corrected from the original to remove errors and clarify terms and sentences. 

On Latour’s Notion of the Digital

Bruno Latour at Digital Humanities 2014

Bruno Latour, professor at Sciences Po and director of the TARDE program (Theory of Actor-network and Research in Digital Environments), recently outlined his understanding of the digital in an interesting part of his plenary lecture at the Digital Humanities 2014 conference. He was honest in accepting that his understanding may itself be a product of his own individuation and pre-digital training as a scholar, which emphasised close-reading techniques and agonistic engagement around a shared text (Latour 2014). Nonetheless, in presenting his attempt to produce a system of what we might call augmented close-reading in the AIME system, he was also revealing about how the digital was being deployed methodologically and about his notion of the digital’s ontological constitution.[1]

Unsurprisingly, Latour’s first move was to deny the specificity of the digital as a separate domain as such, highlighting both the materiality of the digital and its complex relationship with the analogue. He described the analogue structures that underpin the digital processing which makes the digital possible at all (the materials, the specific electrical voltage structures and signalling mechanisms, the sheer matter of it all), but also the digital’s relationship to a socio-technical environment. In other words, he swiftly moved away from what we might call the abstract materiality of the digital, its complex layering over an analogue carrier, and instead reiterated the conditions under which the existing methodological approach of actor-network theory was justified – i.e. the digital forms part of a network, is “physical” and material, requires a socio-technical environment to function, is a “complex function”, and so on.

Slide drawn from Latour (2014)

It would be too strong, perhaps, to state that Latour denied the specificity of the digital as such. Rather, through what we might unkindly call a sophisticated technique of bait and switch, and through a convincingly deployed visualisation of what the digital “really” is – courtesy of an image drawn from Cantwell Smith (2003) – the digital as not-physical was considered to have been refuted. Indeed, this approach to the digital echoes his earlier statements about the digital from 1997, where Latour argues,[2]

I do not believe that computers are abstract… there is (either) 0 and (or) 1 has absolutely no connection with the abstractness. It is actually very concrete, never 0 and 1 (at the same time)… There is only transformation. Information as something which will be carried through space and time, without deformation, is a complete myth. People who deal with the technology will actually use the practical notion of transformation. From the same bytes, in terms of ‘abstract encoding’, the output you get is entirely different, depending on the medium you use. Down with information (Lovink and Schultz 1997).

This is not a new position for Latour; indeed, in earlier work he has stated “actually there is nothing entirely digital in digital computers either!” (original emphasis, Latour 2010a). Whilst this may well be Latour’s polemical style getting rather out of hand, it does raise the question of what it is that is “digital” for Latour, and therefore how this definition enables him to make such strong claims. One is tempted to suppose that it is the materiality of the 0s and 1s that Cantwell Smith’s diagram points towards which enables Latour to dismiss out of hand the complex abstract digitality of the computer as an environment – an environment which, although not immaterial, is still constituted through a complex series of abstraction layers that do enable programmers to work and code in an abstract machine disconnected, in a logical sense, from the materiality of the underlying silicon. Indeed, without this abstraction within the space of digital computers there could be none of the complex computational systems and applications that are built today on abstraction layers. Here space is deployed both in a material sense, as shared memory abstracted across memory chips and the hard disk (which itself may be memory chips), and as a metaphor for the way in which the space of computation is produced through complex system structures that enable programmers to work within a notionally flat address space that is abstracted onto a multidimensional physical structure.

The Digital Iceberg (Berry 2014)

In any case, whilst our attention is distracted by his assertion, Latour moves to cement his switch by making the entirely reasonable claim that the digital lies within a socio-technical environment, and that the way to study the digital is therefore to identify what is observable of it. This, he claims, consists of “segments of trajectories through distributed sets of material practice only some of which are made visible through digital traces”; thus the digital figures less as a domain and more as a set of practices. This approach to studying the digital is, of course, completely acceptable, providing one is cognisant of the way in which the digital in our post-digital world resembles the structure of an iceberg, with only a small part ever visible to everyday life – even to empirical researchers (see diagram above). Otherwise, ethnographic approaches which a priori declare the abstractness of the digital as a research environment illegitimate lose the very specificity of the digital that their well-meaning attempt to capture its materiality calls for. Indeed, the way in which the digital, through complex processes of abstraction, is able to provide mediators to and interfaces over the material is one of the key research questions to be unpacked when attempting to get a handle on the increasing proliferation of the digital into “real” spaces. As such, ethnographic approaches will only ever be part of a set of research approaches for the study of the digital, rather than, as Latour claims, the only, or certainly the most important, research methodology.

This is significant because the research agenda of the digital is heightening – in part due to financial pressures and research grants deployed to engage with digital systems, but also due to the now manifest presence of the digital in all aspects of life – and with it the deployment of methodological and theoretical positions on how such phenomena should be studied. Should one undertake digital humanities or computational social science? Digital sociology or some other approach such as actor-network theory? Latour’s claim that “the more thinking and interpreting becomes traceable, the more humanities could merge with other disciplines” reveals the normative line of reasoning: that the (digital) humanities’ specificity as a research field could be usurped or supplemented by approaches that Latour himself thinks are better at capturing the digital (Latour 2014). Indeed, Latour claims in his book, Modes of Existence, that his project, AIME, “is part of the development of something known by the still-vague term ‘digital humanities,’ whose evolving style is beginning to supplement the more conventional styles of the social sciences and philosophy” (Latour 2013: xx).

To legitimate the claim of Latour’s flavour of actor-network theory as a research approach to the digital, he refers to Boullier’s (2014) work, Pour des sciences sociales de 3ème génération (for third-generation social sciences), which argues that there have been three ages of social context, with the latest emerging from the rise of digital technologies and the capture of the digital traces they make possible. They are:

Age 1: Statistics and the idea of society 

Age 2: Polls and the idea of opinion 

Age 3: Digital traces and the idea of vibrations (quoted in Latour 2014).

Here, vibration follows from the work of Gabriel Tarde, who in 1903 referred to the notion of “vibration” in connection with an empirical social science of data collection, arguing that:

If Statistics continues to progress as it has done for several years, if the information which it gives us continues to gain in accuracy, in dispatch, in bulk, and in regularity, a time may come when upon the accomplishment of every social event a figure will at once issue forth automatically, so to speak, to take its place on the statistical registers that will be continuously communicated to the public and spread abroad pictorially by the daily press. Then, at every step, at every glance cast upon poster or newspaper, we shall be assailed, as it were, with statistical facts, with precise and condensed knowledge of all the peculiarities of actual social conditions, of commercial gains or losses, of the rise or falling off of certain political parties, of the progress or decay of a certain doctrine, etc., in exactly the same way as we are assailed when we open our eyes by the vibrations of the ether which tell us of the approach or withdrawal of such and such a so-called body and of many other things of a similar nature (Tarde 1962: 167–8).

This is the notion of vibration Latour deploys, although he prefers the notion of sublata (similar to capta, or captured data) rather than vibration. For Latour, the datascape is that which is captured by the digital and this digitality allows us to view a few segments, thus partially making visible the connections and communications of the social, understood as an actor-network. It is key here to note the focus on the visibility of the representation made possible by the digital, which becomes not a processual computational infrastructure but rather a set of inscriptions which can be collected by the keen-eyed ethnographer to help reassemble the complex socio-technical environments that the digital forms a part of. The digital is, then, a text within which are written the traces of complex social interactions between actants in a network, but only ever a repository of some of these traces.

Latour finishes his talk by reminding us that the “digital is not a domain, but a single entry into the materiality of interpreting complex data (sublata) within a collective of fellow co-inquirers”. In reiterating his point about the downgraded status of the digital as a problematic within social research, and its pacification through its articulation as an inscription technology (similar to books) rather than a machinery in and of itself, Latour shows us again, I think, that his understanding of the digital is correspondingly weak.

The use of the digital in such a desiccated form points to the limitations of Latour’s ability to engage with the research programme of investigating the digital, but also to the way in which a theologically derived close-reading method drawn from bookish practice may not be entirely appropriate for unpacking and “reading” computational media and software structures.[3] It is not that the digital does not leave traces, as patently it does; rather, these traces are encoded in such forms, at such quantities, and at such densities of data compression that in many cases human attempts to read these inscriptions directly are fruitless, and instead require the mediation of software – and hence a double hermeneutic which places human researchers twice (or more) removed from the inscriptions they wish to examine and read. This is not to deny the materiality of the digital, or of computation itself, but it certainly makes the study of such matter and practices much more difficult than the claims to visibility that Latour presents. It also suggests that Latour’s rejection of the abstraction in and of computation that electronic circuitry makes possible is highly problematic and ultimately flawed.

Notes

[1] Accepting the well-designed look of the website that contains the AIME project, there can be no disputing the fact that the user experience is shockingly bad. Not only is the layout of the web version of the book completely unintuitive, but the process of finding information is clumsy and annoying to use. One can detect the faint glimmer of a network ontology guiding the design of the website, an ontology that has been forced onto the usage of the text rather than organically emerging from use; indeed, the philosophical inquiry appears to have influenced the design in unproductive ways. Latour himself notes: “although I have learned from studying technological projects that innovating on all fronts at once is a recipe for failure, here we are determined to explore innovations in method, concept, style, and content simultaneously” (Latour 2013: xx). Unfortunately, I do think there is something rather odd about the interface, which means that the recipe has been unsuccessful. In any case, it is faster and easier to negotiate the book via a PDF file than through the web interface, or certainly it is better to keep the PDF or the paper copy ready to hand when waiting for the website to slowly grind back into life.
[2] See also Latour stating: “the digital only adds a little speed to [connectivity]. But that is small compared to talks, prints or writing. The difficulty with computer development is to respect the little innovation there is, without making too much out of it. We add a little spirit to this thing when we use words like universal, unmediated or global. But if we say that, in order to make visible a collective of 5 to 10 billion people, in the long history of immutable mobiles, the byte conversion is adding a little speed, which favours certain connections more than others, then this seems a reasonable statement” (Lovink and Schultz 1997).
[3] The irony of Latour (2014) revealing the close reading practices of actor-network theory as a replacement for the close reading practices of the humanities/digital humanities is interesting (see Berry 2011). Particularly in relation to his continual reference to the question of distant reading within the digital humanities and his admission that actor-network theory offers little by way of distant reading methods. Latour (2010b) explains “under André Malet’s guidance, I discovered biblical exegesis, which had the effect of forcing me to renew my Catholic training, but, more importantly, which put me for the first time in contact with what came to be called a network of translations – something that was to have decisive influence on my thinking… Hence, my fascination for the literary aspects of science, for the visualizing tools, for the collective work of interpretation around barely distinguishable traces, for what I called inscriptions. Here too, exactly as in the work of biblical exegesis, truth could be obtained not by decreasing the number of intermediary steps, but by increasing the number of mediations” (Latour 2010b: 600-601, emphasis removed).

Bibliography

Berry, D. M. (2011) Understanding Digital Humanities, Basingstoke: Palgrave Macmillan.

Cantwell Smith, B. (2003) Digital Abstraction and Concrete Reality, in Impressiones, Calcografia Nacional, Madrid.

Latour, B. (2010a) The migration of the aura or how to explore the original through its fac similes, in Bartscherer, T. (ed.) Switching Codes, University of Chicago Press.

Latour, B. (2010b) Coming out as a philosopher, Social Studies of Science, 40(4) 599–608.

Latour, B. (2013) An Inquiry into Modes of Existence: An Anthropology of the Moderns, Harvard University Press.

Latour, B. (2014) Opening Plenary, Digital Humanities 2014 (DH2014), available from http://dh2014.org/videos/opening-night-bruno-latour/

Lovink, G. and Schultz, P. (1997) There is no information, only transformation: An Interview with Bruno Latour, available from http://thing.desk.nl/bilwet/Geert/Workspace/LATOUR.INT

Tarde, G. (1903/1962) The Laws of Imitation, New York: Henry Holt and Company.

On Capture

In thinking about the conditions of possibility that make possible the mediated landscape of the post-digital (Berry 2014), it is useful to explore concepts around capture and captivation, particularly as articulated by Rey Chow (2012). Chow argues that being “captivated” is

the sense of being lured and held by an unusual person, event, or spectacle. To be captivated is to be captured by means other than the purely physical, with an effect that is, nonetheless, lived and felt as embodied captivity. The French word captation, referring to a process of deception and inveiglement [or persuade (someone) to do something by means of deception or flattery] by artful means, is suggestive insofar as it pinpoints the elusive yet vital connection between art and the state of being captivated. But the English word “captivation” seems more felicitous, not least because it is semantically suspended between an aggressive move and an affective state, and carries within it the force of the trap in both active and reactive senses, without their being organised necessarily in a hierarchical fashion and collapsed into a single discursive plane (Chow 2012: 48). 

To think about capture, then, is to think about the mediatized image in relation to reflexivity. For Chow, Walter Benjamin inaugurated a major change in the conventional logic of capture: from a notion of reality being caught or contained in the copy-image, as in a repository, to one in which the copy-image becomes mobile, a mobility that adds to its versatility. The copy-image then supersedes or replaces the original as the main focus; as such, this logic of the mechanical reproduction of images undermines hierarchy and introduces a notion of the image as infinitely replicable and extendable. Thus the “machinic act or event of capture” creates the possibility for further dividing and partitioning – that is, for the generation of copies and images – and sets in motion the conditions of possibility of a reality that is structured around the copy.

Chow contrasts capture with the modern notion of “visibility”, such that, as Foucault argues, “full lighting and the eyes of a supervisor capture better than darkness, which ultimately protected. Visibility is a trap” (Foucault 1991: 200). Thus, in what might be thought of as the post-digital – a term that Chow doesn’t use but which I think is helpful in thinking about this contrast – what is at stake is no longer the link between visibility and surveillance, nor the link between becoming-mobile and the technology of images, but rather the collapse of the “time lag” between the world and its capture.

This is when time loses its potential to “become fugitive” or “fossilised” and hence to be anachronistic. The key point is that the very possibility of memory is disrupted when images become instantaneous and therefore synonymous with an actual happening. Thus, in a condition of the post-digital, digital technologies make possible not only the instant capture and replication of an event, but also the very definition of the experience through its mediation – both at the moment of capture, such as with the waving smartphones at a music concert or event, and in the subsequent recollection and reflection on that experience.

Thus the moment of capture or “arrest” is an event of enclosure, locating a moment and making possible its sharing and distribution through infinite reproduction and dissemination. So capture represents a techno-social moment, but it is also discursive, in that it is a type of discourse derived from the imposition of power on bodies and the attachment of bodies to power. This Chow calls a heteronomy or heteropoiesis: a system or artefact designed by humans with some purpose, not able to self-reproduce, yet able to exert agency in the form of prescription, often back onto its designers – essentially producing an externality in relation to the application of certain “laws” or regulations.

Nonetheless, capture and captivation also constitute a critical response, through the possibility of a disconnecting logic and the dynamics of mimesis. This possibility, reflected through the notion of entanglements, refers to the “derangements in the organisation of knowledge caused by unprecedented adjacency and comparability or parity”. This is, of course, definitional in relation to the notion of computation, which itself works through a logic of formatting, configuration, structuring and the application of computational ontologies (Berry 2011, 2014).

Here capture offers the possibility of a form of practice in relation to alienation by making the inquirer adopt a position of criticism, the art of making strange. Chow here is making links to Brecht and Shklovsky, and in particular their respective predilection for estrangement in artistic practice, such as in Brecht’s notion of verfremdung, and thus to show how things work, whilst they are being shown (Chow 2012: 26-28). In this moment of alienation the possibility is thus raised of things being otherwise. This is the art of making strange as a means to disrupt the everyday conventionalism and refresh the perception of the world – art as device. The connections between techniques of capture and critical practice as advocated by Chow, and reading or writing the digital are suggestive in relation to computation more generally, not only in artistic practice but also in terms of critical theory. Indeed, capture could be a useful hinge around which to subject the softwarization practices, infrastructures and experiences of computation to critical thought both in terms of their technical and social operations but also to the extent to which they generate a coercive imperative for humans to live and stay alive under the conditions of a biocomputational regime.

Bibliography

Berry, D. M. (2011) The Philosophy of Software, London: Palgrave.

Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury.

Chow, R. (2012) Entanglements, or Transmedial Thinking about Capture, Durham: Duke University Press.

Foucault, M. (1991) Discipline and Punish, London: Penguin Social Sciences.

Digital/Post-digital

I want to take up the question of the definition of the “post-digital” again because I think that what the post-digital points towards, as a concept, is the multiple moments in which the digital was operative in various ways (see Berry 2014a, 2014b, 2014c). Indeed, historicising the “digital” can be a useful, if not crucial, step in understanding the transformation(s) of digital technologies. That is, we are at a moment whereby we are able to survey the various constellations of factors that made up a particular historical configuration around the digital, in which the “digital” formed an “imagined” medium to which existing analogue mediums were often compared, and in which the digital tended to be seen as suffering from a lack – e.g. not a medium for “real” news, for film, etc. The digital was another medium to place at the end (of the list) after all the other mediums were counted – and not a very good one. It was where the digital was understood, if it was understood at all, as a complement to other media forms: somewhat lacking, geeky, glitchy, poor quality, and generally suited for toys, like games or the web, or for “boring” activities like accountancy or infrastructure. The reality is that in many ways the digital was merely a staging post, whilst computing capacity, memory, storage and display resolutions could fall in price and rise in power enough to enable a truly “post-digital” environment that could produce new mediated experiences. That is, it appears that the digital was “complementary”, but the post-digital is zero-sum. Here is my attempt to sum up some of the moments that I think might serve as a provocation to debate the post-digital.


DIGITAL            | POST-DIGITAL
-------------------|-------------------
Non-zero sum       | Zero-sum
Objects            | Streams
Files              | Clouds
Programs           | Apps
SQL databases      | NoSQL storage
HTML               | node.js/APIs
Disciplinary       | Control
Administration     | Logistics
Connect            | Always-on
Copy/Paste         | Intermediate
Digital            | Computal
Hybrid             | Unified
Interface          | Surface
BitTorrent         | Scraping
Participation      | Sharing/Making
Metadata           | Metacontent
Web 2.0            | Stacks
Medium             | Platform
Games              | World
Software agents    | Compactants
Experience         | Engagement
Syndication        | Push notification
GPS                | Beacons (IoTs)
Art                | Aesthetics
Privacy            | Personal Cloud
Plaintext          | Cryptography
Responsive         | Anticipatory
Tracing            | Tracking
Surfing            | Reading

figure 1: Digital to Post-Digital Shifts
This table offers constellations or moments within a “digital” as opposed to a “post-digital” ecology, as it were – and, of course, a provocation to thought. But they can also be thought of as ideal types that can provide some conceptual stability for thinking in an environment of accelerating technical change and dramatic and unpredictable social tensions in response to it. The question then becomes: to what extent can the post-digital counteract the tendencies towards domination of specific modes of thought in relation to instrumentality, particularly as manifested in computational devices and systems? For example, the contrast between the moments represented by Web 2.0 / Stacks provides an opportunity for thinking about how new platforms have been built on the older Web 2.0 systems, in some cases replacing them, and in others opening up new possibilities – which Tiziana Terranova (2014) has pointed to in her intriguing notion of “Red Stacks”, for example (in contrast to Bruce Sterling’s notion of “The Stacks”, e.g. Google, Facebook, etc.). Here I have been thinking of the digital as representing a form of “weak computation/computationality”, versus the post-digital as “strong computation/computationality”. What would the consequences be for a society that increasingly finds that the weak computational forms (CDs, DVDs, laptops, desktops, blogs, RSS, Android Open Source Platform [AOSP], open platforms and systems, etc.) are replaced by stronger, encrypted and/or locked-in versions (FairPlay DRM, Advanced Access Content System [AACS], iPads, Twitter, push notification, Google Mobile Services [GMS], trackers, sensors, ANTICRISIS GIRL, etc.)?

These are not meant to be thought of in a merely technical register; rather, the notion of “weak computation” points towards a “weak computational sociality” and “strong computation” points towards a “strong computational sociality”, highlighting the deeper penetration of computational forms into everyday life within social media and push notification, for example. Even as the post-digital opens up new possibilities for contestation – e.g. megaleaks, data journalism, hacks, cryptography, dark nets, torrents, piratization, sub rosa sharing networks such as the Alexandria Project, etc. – and new opportunities for creating, sharing and reading knowledges, the “strong computation” of the post-digital always already suggests the shadow of computation reflected in the heightened tracking, surveillance and monitoring of a control society. The post-digital points towards a reconfiguration of publishing away from the (barely) digital techniques of the older book publishing industry, and towards the post-digital singularity of Amazonized publishing with its accelerated, instrumentalised forms of softwarized logistics, whilst also simultaneously supporting new forms of post-digital craft production of books and journals and providing globalised distribution. How, then, can we think about these contradictions in the unfolding of the post-digital and its tendencies towards what I am calling here “strong computation”? And in what way, even counter-intuitively, does the digital (weak computation) offer alternatives, even as marginal critical practice, and the post-digital (strong computation) create new critical practices (e.g. critical engineering), against the increasing interconnection, intermediation and seamless functioning of the post-digital as pure instrumentality, horizon, and/or imaginary?

Bibliography

Berry, D. M. (2014a) The Post-Digital, Stunlaw, accessed 14/1/2014, http://stunlaw.blogspot.co.uk/2014/01/the-post-digital.html

Berry, D. M. (2014b) Critical Theory and the Digital, New York: Bloomsbury.

Berry, D. M. (2014c) On Compute, Stunlaw, accessed 14/1/2014,  http://stunlaw.blogspot.co.uk/2014/01/on-compute.html

Terranova, T. (2014) Red stack attack! Algorithms, capital and the automation of the common, EuroNomade, accessed 20/2/2014,  http://www.euronomade.info/?p=1708

