
On Latour’s Notion of the Digital

Bruno Latour at Digital Humanities 2014

Bruno Latour, professor at Sciences Po and director of the TARDE program (Theory of Actor-network and Research in Digital Environments), recently outlined his understanding of the digital in an interesting part of his plenary lecture at the Digital Humanities 2014 conference. He was honest in accepting that his understanding may itself be a product of his own individuation and pre-digital training as a scholar, which emphasised close-reading techniques and agonistic engagement around a shared text (Latour 2014). Nonetheless, in presenting his attempt to produce a system of what we might call augmented close-reading in the AIME system, he also revealed how the digital was being deployed methodologically and what he takes the digital’s ontological constitution to be.[1]

Unsurprisingly, Latour’s first move was to deny the specificity of the digital as a separate domain as such, highlighting both the materiality of the digital and its complex relationship with the analogue. He described both the analogue structures that underpin the digital processing that makes the digital possible at all (the materials, the specific electrical voltage structures and signalling mechanisms, the sheer matter of it all) and the digital’s relationship to a socio-technical environment. In other words, he swiftly moved away from what we might call the abstract materiality of the digital, its complex layering over an analogue carrier, and instead reiterated the conditions under which the existing methodological approach of actor-network theory is justified – i.e. the digital forms part of a network, is “physical” and material, requires a socio-technical environment to function, is a “complex function”, and so on.

Slide drawn from Latour (2014)

It would perhaps be too strong to state that Latour denied the specificity of the digital as such; rather, through what we might unkindly call a sophisticated technique of bait and switch, and the use of a convincingly deployed visualisation of what the digital “really” is (courtesy of an image drawn from Cantwell Smith 2003), the digital as not-physical was considered to have been refuted. Indeed, this approach to the digital echoes his earlier statements about the digital from 1997, where Latour argues,[2]

I do not believe that computers are abstract… there is (either) 0 and (or) 1 has absolutely no connection with the abstractness. It is actually very concrete, never 0 and 1 (at the same time)… There is only transformation. Information as something which will be carried through space and time, without deformation, is a complete myth. People who deal with the technology will actually use the practical notion of transformation. From the same bytes, in terms of ‘abstract encoding’, the output you get is entirely different, depending on the medium you use. Down with information (Lovink and Schultz 1997).

This is not a new position for Latour; indeed, in earlier work he has stated “actually there is nothing entirely digital in digital computers either!” (original emphasis, Latour 2010a). Whilst this may well be Latour’s polemical style getting rather out of hand, it does raise the question of what it is that is “digital” for Latour and therefore how this definition enables him to make such strong claims. One is tempted to suppose that it is the materiality of the 0s and 1s that Cantwell Smith’s diagram points towards that enables Latour to dismiss out of hand the complex abstract digitality of the computer as an environment, which, although not immaterial, is still constituted through a complex series of abstraction layers which actually do enable programmers to work and code in an abstract machine disconnected in a logical sense from the materiality of the underlying silicon. Indeed, without this abstraction within the space of digital computers there could be none of the complex computational systems and applications that are built today on abstraction layers. Here space is deployed both in a material sense, as the shared memory abstracted across both memory chips and the hard disk (which itself may be memory chips), and as a metaphor for the way in which the space of computation is produced through complex system structures that enable programmers to work within a notionally two-dimensional address space that is abstracted onto a multidimensional structure.
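
To make this point about abstraction layers concrete, consider a minimal sketch in Python (my own illustration, not drawn from Latour or Cantwell Smith): the same four bytes yield entirely different outputs depending on the encoding layer through which they are read, which is precisely why the digitality of the computer cannot be reduced to the bare materiality of its voltages.

    import struct

    # The same four bytes, read through three different abstraction layers.
    raw = b"\x42\x48\x45\x4c"

    as_uint = struct.unpack("<I", raw)[0]   # little-endian unsigned 32-bit integer
    as_float = struct.unpack("<f", raw)[0]  # IEEE 754 single-precision float
    as_text = raw.decode("ascii")           # ASCII characters

    print(as_uint)   # 1279608898
    print(as_float)  # roughly 5.2e7
    print(as_text)   # "BHEL"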

The Digital Iceberg (Berry 2014)

In any case, whilst our attention is distracted by his assertion, Latour moves to cement his switch by making the entirely reasonable claim that the digital lies within a socio-technical environment, and that the way to study the digital is therefore to identify what is observable of the digital. These, he claims, are “segments of trajectories through distributed sets of material practice only some of which are made visible through digital traces”; thus, he claims, the digital is less a domain and more a set of practices. This approach to studying the digital is, of course, completely acceptable, providing one is cognisant of the way in which the digital in our post-digital world resembles the structure of an iceberg, with only a small part ever visible to everyday life – even to empirical researchers (see diagram above). Otherwise, ethnographic approaches which a priori declare the abstractness of the digital as a research environment illegitimate lose the very specificity of the digital that their well-meaning attempt to capture the materiality of the digital calls for. Indeed, the way in which the digital, through complex processes of abstraction, is able to provide mediators to and interfaces over the material is one of the key research questions to be unpacked when attempting to get a handle on the increasing proliferation of the digital into “real” spaces. As such, ethnographic approaches will only ever be part of a set of research approaches for the study of the digital, rather than, as Latour claims, the only, or certainly the most important, research methodology.

This is significant because the research agenda of the digital has been heightened, in part due to financial pressures and research grants deployed to engage with digital systems, but also due to the now manifest presence of the digital in all aspects of life, and hence the stakes of the methodological and theoretical positions deployed to study such phenomena have risen. Should one undertake digital humanities or computational social science? Digital sociology or some other approach such as actor-network theory? Latour’s claim that “the more thinking and interpreting becomes traceable, the more humanities could merge with other disciplines” reveals the normative line of reasoning that the (digital) humanities’ specificity as a research field could be usurped or supplemented by approaches that Latour himself thinks are better at capturing the digital (Latour 2014). Indeed, Latour claims in his book, Modes of Existence, that his project, AIME, “is part of the development of something known by the still-vague term ‘digital humanities,’ whose evolving style is beginning to supplement the more conventional styles of the social sciences and philosophy” (Latour 2013: xx).

To legitimate the claim of Latour’s flavour of actor-network theory as a research approach to the digital, he refers to Boullier’s (2014) work, Pour des sciences sociales de 3ème génération, which argues that there have been three ages of the social sciences, with the latest emerging from the rise of digital technologies and the capture of digital traces they make possible. They are,

Age 1: Statistics and the idea of society 

Age 2: Polls and the idea of opinion 

Age 3: Digital traces and the idea of vibrations (quoted in Latour 2014).

Here, vibration follows from the work of Gabriel Tarde, who in 1903 referred to the notion of “vibration” in connection with an empirical social science of data collection, arguing that,

If Statistics continues to progress as it has done for several years, if the information which it gives us continues to gain in accuracy, in dispatch, in bulk, and in regularity, a time may come when upon the accomplishment of every social event a figure will at once issue forth automatically, so to speak, to take its place on the statistical registers that will be continuously communicated to the public and spread abroad pictorially by the daily press. Then, at every step, at every glance cast upon poster or newspaper, we shall be assailed, as it were, with statistical facts, with precise and condensed knowledge of all the peculiarities of actual social conditions, of commercial gains or losses, of the rise or falling off of certain political parties, of the progress or decay of a certain doctrine, etc., in exactly the same way as we are assailed when we open our eyes by the vibrations of the ether which tell us of the approach or withdrawal of such and such a so-called body and of many other things of a similar nature (Tarde 1962: 167–8).

This is the notion of vibration Latour deploys, although he prefers the notion of sublata (similar to capta, or captured data) rather than vibration. For Latour, the datascape is that which is captured by the digital and this digitality allows us to view a few segments, thus partially making visible the connections and communications of the social, understood as an actor-network. It is key here to note the focus on the visibility of the representation made possible by the digital, which becomes not a processual computational infrastructure but rather a set of inscriptions which can be collected by the keen-eyed ethnographer to help reassemble the complex socio-technical environments that the digital forms a part of. The digital is, then, a text within which are written the traces of complex social interactions between actants in a network, but only ever a repository of some of these traces.

Latour finishes his talk by reminding us that the “digital is not a domain, but a single entry into the materiality of interpreting complex data (sublata) within a collective of fellow co-inquirers”. This reiterates his point about the downgraded status of the digital as a problematic within social research and its pacification through its articulation as an inscription technology (similar to books) rather than a machinery in and of itself, and shows us again, I think, that Latour’s understanding of the digital is correspondingly weak.

The use of the digital in such a desiccated form points to the limitations of Latour’s ability to engage with the research programme of investigating the digital, but also to the way in which a theologically derived close-reading method drawn from bookish practice may not be entirely appropriate for unpacking and “reading” computational media and software structures.[3] It is not that the digital does not leave traces, as patently it does; rather, these traces are encoded in such a form, at such quantities and resolutions of data compression, that in many cases human attempts to read this inscribed information directly are fruitless, and instead require the mediation of software, and hence a double hermeneutic which places human researchers twice (or more) removed from the inscriptions they wish to examine and read. This is not to deny the materiality of the digital, or of computation itself, but it certainly makes the study of such matter and practices much more difficult than the claims to visibility that Latour presents would suggest. It also suggests that Latour’s rejection of the abstraction in and of computation that electronic circuitry makes possible is highly problematic and ultimately flawed.

Notes

[1] Accepting the well-designed look of the website that contains the AIME project, there can be no disputing the fact that the user experience is shockingly bad. Not only is the layout of the web version of the book completely unintuitive, but the process of finding information is clumsy and annoying. One can detect the faint glimmer of a network ontology guiding the design of the website, an ontology that has been forced onto the usage of the text rather than organically emerging from use; indeed, the philosophical inquiry appears to have influenced the design in unproductive ways. Latour himself notes: “although I have learned from studying technological projects that innovating on all fronts at once is a recipe for failure, here we are determined to explore innovations in method, concept, style, and content simultaneously” (Latour 2013: xx). I have to say that, unfortunately, there is something rather odd about the interface which means that the recipe has been unsuccessful. In any case, it is faster and easier to negotiate the book via a PDF file than through the web interface, or certainly it is better to keep the PDF or the paper copy ready to hand while waiting for the website to slowly grind back into life.
[2] See also, Latour stating: “the digital only adds a little speed to [connectivity]. But that is small compared to talks, prints or writing. The difficulty with computer development is to respect the little innovation there is, without making too much out of it. We add a little spirit to this thing when we use words like universal, unmediated or global. But if we say that, in order to make visible a collective of 5 to 10 billion people, in the long history of immutable mobiles, the byte conversion is adding a little speed, which favours certain connections more than others, then this seems a reasonable statement” (Lovink and Schultz 1997).
[3] The irony of Latour (2014) revealing the close-reading practices of actor-network theory as a replacement for the close-reading practices of the humanities/digital humanities is worth noting (see Berry 2011), particularly in relation to his continual reference to the question of distant reading within the digital humanities and his admission that actor-network theory offers little by way of distant reading methods. Latour (2010b) explains “under André Malet’s guidance, I discovered biblical exegesis, which had the effect of forcing me to renew my Catholic training, but, more importantly, which put me for the first time in contact with what came to be called a network of translations – something that was to have decisive influence on my thinking… Hence, my fascination for the literary aspects of science, for the visualizing tools, for the collective work of interpretation around barely distinguishable traces, for what I called inscriptions. Here too, exactly as in the work of biblical exegesis, truth could be obtained not by decreasing the number of intermediary steps, but by increasing the number of mediations” (Latour 2010b: 600-601, emphasis removed).

Bibliography

Berry, D. M. (2011) Understanding Digital Humanities, Basingstoke: Palgrave Macmillan.

Cantwell Smith, B. (2003) Digital Abstraction and Concrete Reality, in Impressiones, Calcografia Nacional, Madrid.

Latour, B. (2010a) The migration of the aura or how to explore the original through its fac similes, in Bartscherer, T. (ed.) Switching Codes, University of Chicago Press.

Latour, B. (2010b) Coming out as a philosopher, Social Studies of Science, 40(4) 599–608.

Latour, B. (2013) An Inquiry into Modes of Existence: An Anthropology of the Moderns, Harvard University Press.

Latour, B. (2014) Opening Plenary, Digital Humanities 2014 (DH2014), available from http://dh2014.org/videos/opening-night-bruno-latour/

Lovink, G. and Schultz, P. (1997) There is no information, only transformation: An Interview with Bruno Latour, available from http://thing.desk.nl/bilwet/Geert/Workspace/LATOUR.INT

Tarde, G. (1903/1962) The Laws of Imitation, New York: Henry Holt and Company.


Softwarization: A Tentative Genealogy


I was interested in seeing how the term “softwarization” had been used previously, especially considering I use it in my own work, and was rather surprised to find usage dating back to 1969 (Modern Data 1969). So far this is the earliest usage I have been able to uncover, but it is interesting to note, by listing a few quotations from the extant literature, how similar the usage of the concept has been. By no means meant to be exhaustive, the list does demonstrate that the notion of “softwarization” has been around almost as long as the concept of software itself.

  • Much more modest in scope, my book presents episodes from the history of “softwarization”… of culture between 1960 and 2010, with a particular attention to media software – from the original ideas which led to its development to its current ubiquity (Manovich 2013: 5). 
  • Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. It is a potential that is understood as relating to the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate, often theorised as a form of network politics (Berry 2012). 
  • In my view, this ability to combine previously separate media techniques represents a fundamentally new stage in the history of human media, human semiosis, and human communication, enabled by its “softwarization” (Manovich 2008: 29). 
  • Sure, it would be a very good idea, and if you watch and see what happens in the 21st century you’ll see more and more manufacturers deciding to do precisely that, because of the value of empowered user innovation, which will drive down their costs of making new and better products all the time. Indeed for reasons which are as obvious to manufacturers as they are to us, the softwarization of hardware in the 21st century is good for everybody (Moglen 2004).
  • The history of modern production is intimately tied to the automation of business processes. First, companies used steam engines, then conveyor belts, and today we use information systems, and especially software, to automate business activities. We might call it “softwarization” (Schlueter Langdon 2003). 
  • The lightning-fast development of new software is producing technologies and applications ”that we couldn’t even envision 10 years ago,” [W. Brian] Arthur contended, ”redefining whole industries” and creating new ones. Virtually every industry will be affected, he says. Just as the industrial revolution uprooted many blue-collar jobs, today’s ”softwarization” will displace many white-collar workers (Arthur, quoted in Pine 1997)
  • We suggest an expression “softwarization” to describe a general trend, in which “software” such as knowledge and services is given a relatively higher appraisal than “hardware” such as goods and resources (Shingikai 1983: 74). 
  • However, because of the increasing “softwarization” of the industry, this is no longer sufficient. “The time is not far,” Mr. Jequier said, “when computer usage will become part of the normal school and university curriculum” (Modern Data 1969: 32)

Bibliography

Berry, D. M. (2012) Life in Code and Software/Introduction, Life in Code and Software, accessed 26/08/2013, http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software/Introduction

Manovich, L. (2008) Software Takes Command, draft, accessed 26/08/2013, http://black2.fri.uni-lj.si/humbug/files/doktorat-vaupotic/zotero/storage/D22GEWS3/manovich_softbook_11_20_2008.pdf

Manovich, L. (2013) Software Takes Command, London: Bloomsbury.

Modern Data (1969) International News, Modern Data, Volume 2, Issue 8.

Moglen, E. (2004) Eben Moglen’s Harvard Speech – The Transcript, Groklaw, accessed 26/08/2013, http://stevereads.com/cache/eben_moglen_jolt_speech.html

Pine, A. (1997) America’s Economic Future? Happy Days Could Be Here Again, accessed 26/08/2013, Los Angeles Times, http://articles.orlandosentinel.com/1997-06-22/news/9706230181_1_economists-inflation-oil-embargo/2

Schlueter Langdon, C. (2003) Does IT Matter? An HBR Debate – Letter from Chris Schlueter Langdon, Harvard Business Review (June): 16, accessed 26/08/2013, http://www.ebizstrategy.org/research/HBRLetter/HBRletter.htm and http://www.simoes.com.br/mba/material/ebusiness/ITDOESNTMATTER.pdf

Shingikai, K. (1983) Japan in the Year 2000: Preparing Japan for an Age of Internationalization, the Aging Society and Maturity, Japan: Japan Times, Limited.

Phenomenological Approaches to the Computal: Some Reflections on Computation

3D Haptic Technology, University of Hull 2013

Computation is transforming the way in which knowledge is created, used, shared and understood, and in doing so is changing the relationship between knowledge and freedom. It encourages us to ask questions about philosophy in a computational age and its relationship to the mode of production that acts as a condition of possibility for it. Today’s media are softwarized, which imposes certain logics, structures and hierarchies of knowledge onto the processes of production and consumption. This is becoming ever more evident with the advent of digital systems, smart algorithms, and real-time streaming media. We could therefore argue that the long-predicted convergence of communications and computers, originally identified as “compunication” (see Oettinger and Legates 1977; Lawrence 1983), has now fully arrived. Softwarized media lead us to consider how mediation is experienced through these algorithmic systems and the challenges this poses for a phenomenology of the computal.

Contribution to Keynote Symposium at Conditions of Mediation ICA Pre-conference on 17th June 2013, Birkbeck, University of London. 

Exhaustive Media


This is the edited text of a talk given by David M. Berry at Transmediale 2013 at the Depletion Design panel. 

Today there is constant talk about the “fact” that we (or our societies and economies) are exhausted, depleted or in a state of decay (see Wiedemann and Zehle 2012). This notion of decline is a common theme in Western society, for example Spengler’s The Decline of the West, and is articulated variously as a decline in morals, lack of respect for law, failing economic or military firepower, relative education levels, threats from ecological crisis, and so on. Talk of decline can spur societies into paroxysms of panic, self-criticism and calls for urgent political intervention. That is not to say that claims to relative decline are never real as such; indeed, relative measures inevitably shift with respect to generational ageing and change, particularly in relation to other nations. However, the specific decline discourse is interesting for what it reveals about the concerns and interests of each population, and a particular generation within it, whether it be concerns with health, wealth, or intellectual ability, and so forth.

Karl Mannheim

The issue of a tendency inherent in a temporal social location – that is, the certain definite modes of behaviour, feeling and thought that arise from what we might call constantly repeated experience in a common location in the historical dimension of the social process – is what Karl Mannheim called the generation entelechy (Mannheim 1952). This is the process of living in a changed world: a re-evaluation of the inventory, a forgetting of that which is no longer useful and a coveting of that which has yet to be won. In other words, it is the particular stratification of experience in relation to the historical contexts of a specific generation – in both what we might call the inner and the outer dimensions of experience. This social process also naturally causes friction between different generation entelechies, such as that between an older and a younger generation – there may also be moments of conflict within a generation entelechy, or what Mannheim called generation units, although there is not space here to develop this question in relation to the digital.

The relative conditions of possibility, particularly in relation to what we might call the technical milieu for a generation entelechy, contribute towards slower or faster cultural, social, and economic change. The quicker the pace of social, technical, economic and cultural change, the greater the likelihood that a particular generation location group will react to the changed situation by producing its own entelechy. Thus, individual and group experiences act as crystallising agents in this process, and this plays out in notions of “being young”, “freshness”, “cool”, or being “with it” in some sense, which acts to position generation entelechies in relation to each other both historically and culturally.

Mannheim identifies the crucial features of a generation entelechy as: (1) new participants in the cultural process are emerging, whilst (2) former participants in that process are continually disappearing; (3) members of one generation can participate only in a temporally limited section of the historical process; (4) it is therefore necessary to continually transmit the accumulated cultural heritage; and (5) the transition from generation to generation is a continuous process (Mannheim 1952).

In relation to this general talk of depletion in Europe and the US, one of the recent decline-issues, particularly in the US and UK context, has been the worry about the decline in the computational ability of younger generations – more specifically, the lack of digital literacy (or what I elsewhere call iteracy) of the new generations. In this specific discourse, the worry is articulated that a new generation is emerging that is not adequately prepared for what appears to be a deeply computational economic and cultural environment. This is usually, although not always, linked to a literal exhaustion of the new generation, the implication being a generation that is unprepared, apathetic, illiterate and/or disconnected. Often these claims are located within what Mannheim calls the “Intelligentsia”. He writes, “in every society there are social groups whose special task it is to provide an interpretation of the world for that society. We call these the ‘Intelligentsia’” (Mannheim 1967: 9). It is no surprise, then, that in the instance of digital literacy we see the same strata across society commenting on and debating the relative merits of computational competences, abilities and literacies at a number of different levels, but particularly in relation to the education of new generations through discussions of school, college and university digital literacies.

Some of these claims are necessarily the result of a form of generational transference of the older generation’s own worries concerning its inadequacies, in this case usually either (1) an inability to use the correct type of computational devices/systems; (2) a concern that the young are not using computers in the correct manner in which they themselves were taught, for example using a physical keyboard and mouse; or (3) a dismissal of the new forms of digitality, which are seen as trivial, wasteful of time, and hence socially or economically unproductive – a classic example of this being social media. There are a number of themes and levels of analysis brought out in these discussions, ranging from the question of the moral failings of the new generation, to their technical abilities and economic possibilities (such as vocationalism), to the ways of thinking appropriate to a perceived new environment or economic and technical ecology. This is similar to Foucault’s question of a generational ethos, as it were, and whether it might be helpful if we,

envisage modernity rather as an attitude than as a period of history. And by ‘attitude,’ I mean a mode of relating to contemporary reality; a voluntary choice made by certain people; in the end, a way of thinking and feeling; a way, too, of acting and behaving that at one and the same time marks a relation of belonging and presents itself as a task. A bit, no doubt, like what the Greeks called an ethos (Foucault 1984: 39). 

Immanuel Kant

Here, though, I want to take the problem of the exhaustion of the new generations as a focus – the “exhausted” literally, as in the Latin exhaustus, “drained out”. In other words, to ask why, how, by whom and what is “drained out”, and where to, in our highly computational cultures. That is, to turn the question around and identify the exhaustion of the new generations as indeed an important site of concern, but to argue that the exhaustion the new generations are experiencing is not an apathy or lack of energy, but rather a product of the political economy – an ethos that results from being subject to the digital draining of data, information and energy into technical systems through specific drainage points, operating through and on computational devices, and particularly intimate technologies like mobile phones, tablets and laptops. This is to focus on the extent to which digital media are increasingly becoming exhaustive media, and to critically interrogate their function, form and content.

To put it another way, what would be the “enlightenment” in relation to the new exhaustive media and the software ecologies of trackers, web-bugs, beacons, apps, clouds and streams? If we are to agree with Kant that the enlightenment is the universal, free, public uses of reason (Kant 1991), how do we assure freedom of public reason in the digital age? As Foucault described, for Kant,

when one is reasoning only in order to use one’s reason, when one is reasoning as a reasonable being (and not as a cog in a machine), when one is reasoning as a member of reasonable humanity, then the use of reason must be free and public. Enlightenment is thus not merely the process by which individuals would see their own personal freedom of thought guaranteed. There is Enlightenment when the universal, the free, and the public uses of reason are superimposed on one another (Foucault 1984: 36-37).

Thus for Kant, to reach our political maturity as human beings we should “dare to know”, or sapere aude, that is, “have courage to use your own reasoning” (Kant 1991: 54). The challenge, then, is for us to rise to the call issued by Foucault to think in terms of the “historical ontology of ourselves”, which enables us to further test contemporary reality to find “change points”, and to ask what the implications might be for an investigation of the events by which we constitute ourselves as subjects. Indeed, Foucault further argues,

Michel Foucault

I do not know whether we will ever reach mature adulthood. Many things in our experience convince us that the historical event of the Enlightenment did not make us mature adults, and we have not reached that stage yet. However, it seems to me that a meaning can be attributed to that critical interrogation on the present and on ourselves which Kant formulated by reflecting on the Enlightenment. It seems to me that Kant’s reflection is even a way of philosophizing that has not been without its importance or effectiveness during the last two centuries. The critical ontology of ourselves has to be considered not, certainly, as a theory, a doctrine, nor even as a permanent body of knowledge that is accumulating; it has to be conceived as an attitude, an ethos, a philosophical life in which the critique of what we are is at one and the same time the historical analysis of the limits that are imposed on us and an experiment with the possibility of going beyond them (Foucault 1984: 49).

One way forward might be to begin to map the exhaustion of a new generation entelechy in terms of a new political economy that is emerging around the ability to exhaust us of our thoughts, movements, health, life, practices, etc. This is usefully captured by the term of art in technical circles, the “data exhaust”, which all users of computational systems create. We might therefore think in terms of the computational imaginaries that are crystallised within particular generation entelechies – and how we might gain a critical purchase on them. In other words, the generation entelechy connected to a particular computational Weltanschauung, or worldview – what I elsewhere call computationality (Berry 2011).

This is to move away from a concern with the mere competences of a new generation entelechy and to widen the focus to the critical and reflexive abilities of a new generation and how they might be actuated. That is, rather than teach computer programming as a skill for a new economy, instead explore the historical, philosophical, theoretical and critical context for the various forms of digital making. One way of doing this might be to look at concrete case studies of actual programming sites and projects, in order to understand why and how these forms of activity are related, the context in which they have developed, and their trajectories – a research project that has recently begun to be closely associated with critical strands in software studies (Berry 2011).

This is a critical means of contributing to the important project of making the invisibility of much of the digital infrastructure visible and available to critique. Of course, understanding digital technology is a “hard” problem for the humanities, liberal arts and social sciences due to the extremely complex forms which contain agentic functions and normative (but often hidden) values. Indeed, we might contemplate the curious problem that as the digital increasingly structures the contemporary world, it also withdraws, becomes backgrounded (Berry 2011). This enables us to explore how knowledge is transformed when mediated through code and software and to apply critical approaches to big data, visualisation, digital methods, digital humanities, and so forth. But crucially it is also to see this in relation to the crystallisation of new entelechies around digital technologies.

Thinking about knowledge in this way enables us to explore the generational epistemological changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks – what Mitcham calls a ‘new ecology of artifice’ (Mitcham 1998: 43). Indeed, the proliferation of contrivances that are computationally based is truly breathtaking, and each year we are given statistics that demonstrate how profound the new computational world is. For example, in 2012, 427 million Europeans (or 65 percent) used the internet, and more than 90% of European internet users read news online (Wauters 2012). These computational devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects, and usage have to be subject to the kind of critical reasoning that both Kant and Foucault called for.

This is nonetheless made much more difficult both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful, and less power-hungry, and by the increase in complexity, power, range, and intelligence of the software that powers them. Of course, we should also be attentive to the over-sharing, or excessive and often invisible collection, of data within these device ecologies, which lies outside the control of the user to ‘redact themselves’, as represented by the recent revelation that the “Path” and “Hipster” apps were automatically harvesting user address-book data on mobile phones (BBC 2012).

We might consider these transformations in light of what Eric Schmidt, ex-CEO of Google, called “augmented humanity”. He described this as a number of movements within the capabilities of contemporary computational systems, such that at Google, “we know roughly who you are, roughly what you care about, roughly who your friends are…Google also knows, to within a foot, roughly where you are… I actually think most people don’t want Google to answer their questions… They want Google to tell them what they should be doing next” (Eaton 2011). Translated, this means that Google believes that it knows better than the user what it is that they should be doing, and in many cases even thinking. Thus, the computational device the user holds contains the means to source the expertise to prescribe action in particular contexts, what we might call “context consumerism”. That is, the user is able to purchase their cognitive/memory/expertise capabilities as required, on the fly. Thus, humanity becomes what we might call, following the development of programming languages such as C++, a new augmented or extended humanity++. Indeed, there are now a number of examples of these developments in relation to, for example, Google Glass, contextual UX, locative technologies, etc.

Bernard Stiegler
We might consider the entire technical and media industries in light of what Stiegler (2010) has called the “Programming Industries”, which are involved in creating institutionalized “context”. This is data collected from the tacit knowledge of users and their “data exhaust” and delegated to computer code/software. These algorithms then create “applied knowledge” and are capable of making “judgments” in specific use cases. Indeed, today people rarely use raw data – they consume it in processed form, using computers to aggregate or simplify the results. This means that increasingly the “interface” to computation is “visualised” through computational/information aesthetics techniques and visualisation, a software veil that hides the “making” of the digital computations involved. Indeed, today we see this increasingly being combined with real-time contextual sensors, history and so forth into “cards” and other push-notification systems that create forms of just-in-time memory/cognitive processes.
These are new forms of invisible interfaces/ubiquitous computing/enchanted objects which use context to present the user with predictive media and information in real time. The aim, we might say, is to replace forethought by reconfiguring or replacing human “secondary memory” and thinking with computation. That is, the crucial half-second of pre-conscious decision-forming processes whereby we literally “make up our own minds” is today subject to the unregulated and aggressive targeting of the programming industry. This temporally located area of the processes of mind we might call the “enlightenment moment”, as it is the fraction of a second that creates the condition of possibility for independent thought and reflexivity itself. Indeed, far from being science fiction, this is now the site of new technologies in the process of being constructed, current examples including Google Now, Apple Siri, MindMeld, Tempo, etc. Not to mention the aggressive targeting by advertising companies of this area, and more worryingly of new generation entelechies who are still developing their critical or reflexive skills, such as children and young people. This, of course, raises important questions about whether these targeted computation systems and contextual processes should be regulated in law in relation to the young. These are not new issues in relation to the regulation of the minds of children, but the aggressiveness of computational devices and the targeting of this forethought by the programming industries raises the stakes further; indeed, as Stiegler quotes,

after decades of struggle in civil society, governments have been forced to regulate air pollution, food and water,… few governments have shown themselves capable of regulating marketing practices targetting children. This situation has left industry free to decide what children watch on television, what products they are offered in order to distract them, what strategies can be used to manipulate their wishes, desires, and values (Brodeur, quoted in Stiegler 2010: 88)

For example, in the UK, with the turn to a competitive model of higher education, each university literally begins to compete for an “audience” of students to take its courses, students who now pay a considerable sum of money to be both educated and entertained. We could say that the universities become, in effect, another branch of the cultural industry. This represents a dangerous moment for the creation of critical attention, the possibility of reflexivity and enlightenment, in as much as increasingly students receive from the lecturer but do not need to participate; they await their educational portions, which are easy to swallow, and happily regurgitate them in their assessments. The students are taught that they are the consumers of a product (rather than themselves the product of education, as reflexive citizens in majority), and that this service industry, the university, is there to please them, to satisfy their requirements. How could this be otherwise when they are expected to fill in survey after survey of market-research questions to determine how “satisfied” they are, how “happy” and “content” they are with their consumption? Which remains, finally, the delivery of the best possible product: the first-class degree, the A marks, the final certificate covered in gilt which will deliver them the best-paying job possible. The university itself becomes a device, an interface between consumer and producer, which itself becomes highly technologised as it seeks to present a surface commodity layer to its consuming students. It is in this context that MOOCs (Massive Open Online Courses) should be understood and critiqued, as they represent only the public face of changes taking place on the inside of universities at all levels.

The new imaginaries of highly invasive cognitive technologies are already being conceptualised as the “Age of Context” within the programming industries. Indeed, under this notion all material objects are carriers of signal to be collected and collated into computational systems; even the discarded, the trash, etc., contains RFID chips that can provide data for contextual systems. But more importantly, the phones we carry, the mobile computers and the tablets, now built with a number of computational sensors, such as GPS, compasses, gyroscopes, microphones, cameras, wifi, and radio transmitters, enable a form of contextual awareness to be technically generated through massive real-time flows of data. For example, in the US Presidential election on 6/11/2012, Twitter recorded 31 million election-related Tweets from users of the streaming service – 327,452 Tweets per minute (TPM) (Twitter 2012) – all of which can be fed to the user. In a real-time stream ecology, such as Twitter, the notion of the human is already contested and challenged by a form of “hyper attention”, in contrast to the ‘deep attention’ of previous ages. Indeed, the user is constantly bombarded with data. This is increasingly understood as a lack within human capabilities which needs to be remedied using yet more technology – real-time streams need visualisation, cognitive assistants, push notifications, dashboard interfaces, and so forth.

Google Now and the Notification “Cards”

This much heralded “Age of Context” is being built upon the conditions of possibility made feasible by distributed computing, cloud services, smart devices, sensors, and new programming practices around mobile technologies. This new paradigm of anticipatory computing stresses the importance of connecting up multiple technologies that provide data from real-time streams and APIs (Application Programming Interfaces) to enable a new kind of intelligence within these technical devices. A good example of this is given by Google’s new “Google Now” product, which attempts to think “ahead” of the user by providing algorithmic prediction based on past user behavior, preferences, Google search result history, smart device sensors, geolocation, and so forth. As Google explains,

Google Now gets you just the right information at just the right time. It tells you today’s weather before you start your day, how much traffic to expect before you leave for work, when the next train will arrive as you’re standing on the platform, or your favorite team’s score while they’re playing. And the best part? All of this happens automatically. Cards appear throughout the day at the moment you need them (Google 2012).
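
Purely as an illustrative sketch (the rules, names and values below are hypothetical, and not Google’s actual implementation), the logic of such anticipatory “cards” can be thought of as context-triggered rules evaluated over sensor data and user history:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Context:
        """Hypothetical snapshot of the signals an anticipatory system draws on."""
        hour: int             # local time of day
        location: str         # e.g. "home", "station", "work"
        next_event: str       # drawn from calendar history
        commute_minutes: int  # estimated from live traffic data

    def cards_for(ctx: Context) -> List[str]:
        """Return the notification 'cards' such a system might surface for a given context."""
        cards = []
        if ctx.location == "home" and ctx.hour < 9:
            cards.append(f"Leave now: {ctx.commute_minutes} min to work in current traffic")
        if ctx.location == "station":
            cards.append("Next train arrives in 4 minutes")  # placeholder value
        if ctx.next_event:
            cards.append(f"Upcoming: {ctx.next_event}")
        return cards

    print(cards_for(Context(hour=8, location="home", next_event="Panel at 14:00", commute_minutes=35)))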

These new technologies form a constellation that creates new products and services, new tastes and desires, and the ability to make an intervention into forethought – to produce the imaginary that Google names “Augmented Humanity” (Eaton 2011). In some senses this follows from the idea that after ‘human consciousness has been put under the microscope, [it has been] exposed mercilessly for the poor thing it is: a transitory and fleeting phenomenon’ (Donald, quoted in Thrift 2006: 284). The idea of augmented humanity and contextual computing are intended to remedy this ‘problem’ in human cognitive ability. Here the technologists are aware that they need to tread carefully as Eric Schmidt, Google’s ex-CEO, revealed “Google policy is to get right up to the creepy line and not cross it” (Schmidt, quoted in Richmond 2010). The “creepy line” is the point at which the public and politicians think a line has been crossed into surveillance, control, and manipulation, by capitalist corporations – of course, internally Google’s experimentation with these technologies is potentially much more radical and invasive. These new technologies need not be as dangerous as they might seem at first glance, and there is no doubt that the contextual computing paradigm can be extremely useful for users in their busy lives – acting more like a personal assistant than a secret policeman. Indeed, Shel Israel argues that this new ‘Age of Context’ is an exciting new augmented world made possible by the confluence of a number of competing technologies. He writes that contextual computing is built on,

[1] social media, [2] really smart mobile devices, [3] sensors, [4] Big Data and [5] mapping. [Such that] the confluence of these five forces creates a perfect storm whose sum is far greater than any one of the parts (Israel 2012).

These technologies are built on complex intertwined webs of software that tie together these new meta-systems, which abstract from (are built upon):

  • the social layer, such as Twitter and Facebook,
  • the ambient data collection layer, using the sensors in mobile devices,
  • the web layer, the existing and future web content and technologies,
  • the notification layer, enabling reconciliation and unification of real-time streams,
  • the app layer, which is predominantly made up of single-function apps, 

These various layers are then loosely coupled so that they interoperate in unexpected but “delightful” ways, as experienced with conversational interfaces such as Apple Siri, which have both an element of “understanding” and contextual information about their environment. Critically engaging with this age of context is difficult due to the distributed software, material objects, “enchanted” objects, black-boxed systems and computational “things” that make it up. Neither the threads that hold these systems together nor their new calculative dashboards (e.g. notification interfaces) are well understood as a totality. Indeed, we can already discern new forms of power that are tentatively visible in this new context layer, enabling new political-economic actors and a new form of intensive exploitation, such as that demonstrated by the intensification of the pre-cognitive moment discussed earlier.
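
A rough sketch of what this loose coupling amounts to in software terms might look as follows (the layer functions are hypothetical placeholders rather than any vendor’s actual API): each layer produces its own stream with no knowledge of the others, and a notification layer reconciles them into the single feed the user sees.

    from typing import Callable, Dict, List

    # Hypothetical stand-ins for the layers listed above.
    def social_layer() -> List[str]:
        return ["@friend mentioned you"]            # e.g. Twitter/Facebook stream

    def ambient_layer() -> List[str]:
        return ["GPS: near the conference venue"]   # device sensor data

    def web_layer() -> List[str]:
        return ["Weather: rain expected at 17:00"]  # existing web content and APIs

    # The notification layer reconciles the independent streams into one feed;
    # the app layer would then render each item as a single-function surface.
    def notification_layer(sources: Dict[str, Callable[[], List[str]]]) -> List[str]:
        feed = []
        for name, source in sources.items():
            feed.extend(f"[{name}] {item}" for item in source())
        return feed

    print(notification_layer({"social": social_layer, "ambient": ambient_layer, "web": web_layer}))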

Iconic New Aesthetic Image from Google Earth

I have argued previously that moments like the “new aesthetic” and glitches (Berry 2011, 2012a, 2012b, 2012c), and, with others, that exceptions and contextual failures, are useful for beginning to map these new systems (Berry et al 2012; Berry et al 2013). The black box of these exhaustive systems is spun around us in wireless radio networks and RFID webs – perhaps doubly invisible. We need to critique moments in exhaustive media that are connected to particular forms of what we might call “exhaustive” governmentality – self-monitoring and life-hacking practices, aesthetic, political, social, economic, etc. – but also the way in which they shape the generational entelechies. For example, this could be through the creation of an affective relation with real-time streaming ecologies and a messianic mode of media. Indeed, we might say that anticipatory computing creates a new anticipatory subject, which elsewhere I have called the riparian citizen (Berry 2011: 144).

Indeed, it seems to me that mapping how computation contributes to new generational entelechies and functions to limit their ability to critically reflect on their own historical dimension of the social process is a crucial problem, for example where hegemonic rhetorics of the digital – “new aesthetic”, “pixels”, “sound waves” and so forth, are widely used to convince and seldom challenged. Indeed this contributes to a wider discussion of how medial changes create epistemic changes. For me, this project remains linked to a critical ontology of ourselves as ethos, a critical philosophical life and the historical analysis of imposed limits to reach towards experiments with going beyond current conditions and limits (Foucault 1984). Indeed, the possibility of a “digital” enlightenment ethos needs to be translated into coherent “labor of diverse inquiries”, one of which is the urgent focus on the challenge to thinking represented by the intensification of the programming industries on the “enlightenment moment” of our prethought. This requires methodological approaches, which could certainly draw on the archeological and genealogical analysis of practices suggested by Foucault (1984) but also on the technological and strategic practices associated with shaping both the policies and concrete technologies themselves – perhaps, if not necessarily “Evil Media” (Fuller and Goffey 2012), then certainly critical software and political praxis. Last, and not least, is the theoretical moment required in developing the conceptual and critical means of defining unique forms of relations to things, others, ourselves (Foucault 1984) that are not limited by the frame of computationality.

Bibliography

BBC (2012) iPhone Apps Path and Hipster Offer Address-book Apology, BBC, 9 February 2012, http://www.bbc.co.uk/news/technology-16962129

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave.

Berry, D. M. (2012a) Abduction Aesthetic: Computationality and the New Aesthetic, Stunlaw, accessed 18/04/2012, http://stunlaw.blogspot.co.uk/2012/04/abduction-aesthetic-computationality.html

Berry, D. M. (2012b) Computationality and the New Aesthetic, Imperica, accessed 18/04/2012. http://www.imperica.com/viewsreviews/david-m-berry-computationality-and-the-new-aesthetic
Berry, D. M. (2012c) Understanding Digital Humanities, London: Palgrave.

Berry, D. M., Dartel, M. v., Dieter, M., Kasprzak, M., Muller, N., O’Reilly, R., and Vicente, J. L. (2012) New Aesthetic, New Anxieties, Amsterdam: V2 Press.

Berry, D. M., Dieter, M., Gottlieb, B., and Voropai, L. (2013) Imaginary Museums, Computationality & the New Aesthetic, BWPWAP, Berlin: Transmediale.

Eaton, K. (2011) The Future According to Schmidt: “Augmented Humanity,” Integrated into Google, Fast Company, 25 January 2011, http://www.fastcompany.com/1720703/future-according-schmidt-augmented-humanity-integrated-google

Foucault, M. (1984) What is Enlightenment?, in Rabinow P. (ed.) The Foucault Reader, New York, Pantheon Books, pp. 32-50.

Fuller, M. and Goffey, A. (2012) Evil Media, MIT Press.

Google (2012) Google Now, Google, 2012, http://www.google.com/landing/now/

Israel, S. (2012) Age of Context: Really Smart Mobile Devices, Forbes, 5 September 2012, http://www.forbes.com/sites/shelisrael/2012/09/05/age-of-context-really-smart-mobile-devices/

Kant, I (1991) An Answer to the Question: What is Enlightenment?, in Kant: Political Writings, Cambridge: Cambridge University Press.

Mannheim, K. (1952) The Problem of Generations,  in Kecskemeti, P. (ed.) Karl Mannheim: Essays, London: Routledge, pp 276-322, accessed 15/02/2013, http://www.history.ucsb.edu/faculty/marcuse/classes/201/articles/27MannheimGenerations.pdf

Mannheim, K. (1967) Ideology and Utopia, London: Harvest.

Mitcham, C. (1998) The Importance of Philosophy to Engineering, Teorema, Vol. XVII/3 (Autumn, 1998).

Richmond, S. (2010) Eric Schmidt: Google Gets Close to ‘the Creepy Line’, The Telegraph, 5 October 2010, http://blogs.telegraph.co.uk/technology/shanerichmond/100005766/eric-schmidt-getting-close-to-the-creepy-line/

Stiegler, B. (2010) For a New Critique of Political Economy, Cambridge: Polity Press.

Stiegler, B. (2010) Taking Care of Youth and the Generations, Cambridge: Polity Press.

Thrift, N. (2006) Re-inventing Invention: New Tendencies in Capitalist Commodification, Economy and Society, 35.2 (May, 2006): 284.

Wauters, R. (2012) 427 Million Europeans are Now Online, 37% Uses More than One Device: IAB, The Next Web, 31 May 2012, http://thenextweb.com/eu/2012/05/31/427-million-europeans-are-now-online-37-uses-more-than-one-device-iab/

Wiedemann, C. and Zehle, S. (2012) Depletion Design: A Glossary of Network Ecologies, Amsterdam: Institute for Network Cultures

Critical Digital Humanities

Critical Digital Humanities is an approach to the study and use of the digital which is attentive to questions of power, domination, myth and exploitation – what has been called “The Dark Side of the Digital Humanities” (Chun 2013; Grusin 2013; Jagoda 2013; Raley 2013). It develops an interdisciplinary approach which includes:

  • Critical Theory
  • Theoretical work on Race, Ethnicity, Gender, Sexuality and Class (TransformDH 2013).
  • The historical, social, political, and cultural contexts around digital transformations
  • Work that is both research- and practice-led
  • Reflexivity towards its own historical context and theoretical limitations
  • A commitment to political praxis
  • Theoretical work and “building things”
  • Technologically engaged work, including critical approaches such as software studies/critical code studies. 
  • Cultural/Critical Political Economy

As such, critical digital humanities seeks to address the concerns expressed by Liu (2012) and others that digital humanities lacks a cultural critique (see Golumbia 2012). Liu argued,

While digital humanists develop tools, data, and metadata critically, therefore (e.g., debating the “ordered hierarchy of content objects” principle; disputing whether computation is best used for truth finding or, as Lisa Samuels and Jerome McGann put it, “deformance”; and so on) rarely do they extend their critique to the full register of society, economics, politics, or culture (Liu 2012).

Thus Liu asks “how [can] the digital humanities advance, channel, or resist today’s great postindustrial, neoliberal, corporate, and global flows of information-cum-capital”, and why this is “a question rarely heard in the digital humanities associations, conferences, journals, and projects” (Liu 2012). The aim of critical digital humanities as outlined here is not to offer a prescription of a final approach; rather, it is to begin to enumerate the plurality of approaches within such a field, and more specifically to point to a constellation of concepts related to a notion of “digital humanities”, the softwarization of the humanities more generally, and the post-digital. Indeed, critical digital humanities could, paraphrasing Grusin (2013) slightly,

help to redefine our traditional humanistic practices of history, critique, and interpretation, so these humanistic traditions can help to refine and shape the direction and critical focus of digital humanities and its place in the institutional infrastructure of the academy (Grusin 2013). 

Bibliography
Chun, W. (2013) The Dark Side of the Digital Humanities, accessed 10/01/2013, http://www.c21uwm.com/2013/01/09/the-dark-side-of-the-digital-humanities-part-1/
Golumbia, D. (2012) Why Digital Humanities Hates Literary and Cultural Studies: 
Grusin, R. (2013) The Dark Side of the Digital Humanities, accessed 10/01/2013, http://www.c21uwm.com/2013/01/09/dark-side-of-the-digital-humanities-part-2/
Jagoda, P. (2013) The Dark Side of the Digital Humanities, accessed 10/01/2013, http://www.c21uwm.com/2013/01/09/the-dark-side-of-the-digital-humanities-part-3/
Liu, A. (2012) Where Is Cultural Criticism in the Digital Humanities?, Gold, M. K. (ed.) Debates in the Digital Humanities, accessed 11/1/2013, http://dhdebates.gc.cuny.edu/debates/text/20
Raley, R. (2013) The Dark Side of the Digital Humanities, accessed 10/01/2013, http://www.c21uwm.com/2013/01/09/the-dark-side-of-the-digital-humanities-part-4/
TransformDH (2013) #TransformDH: Transformative Digital Humanities: Doing Race, Ethnicity, Gender, Sexuality and Class in DH, accessed 11/1/2013, http://transformdh.org

The New Aesthetic: A Maieutic of Computationality

Screen testing at main stage for the Republican convention in Tampa, Fla (2012)

Many hasty claims are now being made that the new aesthetic is over, finished, or defunct. I think that, as with many of these things, we will have to wait and see the extent to which the new aesthetic is "new", an "aesthetic", used in practice, or has any trajectory associated with it. For me, the responses it generates are as interesting as the concept of the new aesthetic itself.

And regarding the "remembering" (perhaps, territorialization) of new media and previous practices, let’s not forget that forgetting things (deterritorialization) can be extremely productive, both theoretically and in everyday practice (as elpis, perhaps, if not as the entelechy of new generations). Indeed, forgetting can be like forgiving,[1] and in this sense can allow the absorption or remediation of previous forms (a past bequeathed by the dead) that may have been contradictory or conflictual to be transcended at a higher level (this may also happen through a dialectical move, of course).[2] This is, then, a politics of memory as well as an aesthetic.

But the claim that the new aesthetic "seems to be all gesture and no ideology" is clearly mistaken. Yes, the NA is clearly profoundly gestural and is focused on the practice of doing, in some sense, even if the doing is merely curatorial or a collecting of other things (as archive/database of the present). The doing is also post-human in that algorithms and their delegated responsibility and control appear to be a returning theme (as the programming industry, as logics of military colonisation of everyday life, as technical mediation, as speed constitutive of an absolute past, or as reconstitution of knowledge itself). It is also ideological to the extent that it is an attempt to further develop a post-human aesthetic (and of course, inevitably this will/may/should end in failure) but nonetheless it reflects in interesting ways a process of cashing out the computational in the realm of the aesthetic – in some senses a maieutic of computational memory, seeing and doing (a "remembering" of glitch ontology or computationality).

As to the charge that historicism inevitably counters the claims of the new aesthetic, one might wish to consider the extent to which the building of the new aesthetic shares the values of computer science (highly ideological, I might add), which is also profoundly ahistorical, and which enables the delegation of the autonomy of the new aesthetic (as code/software) as a computational sphere. This is not to deny the importance of critical theory here, far from it, but rather to raise a question about computation’s immunity to the claims that critical approaches inevitably make – as Ian Bogost recently declared (about a different subject), are these not just "self-described radical leftist academics" and their "predictable critiques"? Could not the new aesthetic form an alliance here with object-oriented ontology?

Within this assemblage, the industrialisation of programming and memory becomes linked to the industrialisation of "seeing" (and here I am thinking of mediatic industries). What I am trying to gesture towards, if only tentatively, is that if the new aesthetic, as an aesthetic of the radically autonomous claims of a highly computational post-digital society, formats the world in ways which profoundly determine, or at least offer concrete tendencies towards, an aesthetic which is immune to historicism – in other words, if the algorithms aren’t listening to the humanists – do we need to follow Stephen Ramsay’s call for humanists to build?

Here I point both to the industrialisation of memory and to the drive towards a permanent revolution in all forms of knowledge that the computational industries ceaselessly aim towards. That is, the new aesthetic may be a reflexive sighting (the image, the imaginary, the imagined?) and acknowledgement of the mass-produced temporal objects of the programming industries, in as much as they are shared structures, forms, and means, that is, algorithms and codes, that construct new forms of reception to which consciousness and the collective unconscious will increasingly correspond.

Notes

[1] “Forgiving is the only reaction which does not merely re-act but acts anew and unexpectedly, unconditioned by the act which provoked it and therefore freeing from its consequences both the one who forgives and the one who is forgiven” (Hannah Arendt, The Human Condition, page 241), “and if he trespass against thee… and… turn again to thee, saying, I changed my mind; thou shalt release him” (Luke 17: 3-4)
[2] Here I am thinking in terms of Mannheim’s concept of “Generation Entelechy” and “Generation Unit” to consider the ways in which the quicker the tempo of social cultural change, here understood as represented through digital technology, the greater the chances that a particular generation location’s group will react to changed circumstances by producing their own entelechy.

Coping Tests as a Method for Software Studies

In this post I want to begin to outline a method for software reading that in some senses can form the basis of a method in software studies more generally. The idea is to use the pragmata of code, combined with its implicit temporality and goal-orientedness, to develop an idea of what I call coping tests. This notion draws on Heidegger’s idea of “coping” as a specific mode of experiencing that takes account of the at-handedness (Zuhandenheit) of equipment (that is, entities/things/objects which are being used in action) – in other words, coping tests help us to observe the breakdowns of coded objects. This is useful because it helps us to think about the way software/code is in some senses a project that is not just static text on a screen, but a temporal structure that has a past, a processing present, and a futural orientation to the completion (or not) of a computational task. I want to develop this in contrast to attempts by others to focus on code through either a heavily textual approach (critical code studies tends in this direction), or else a purely functionality-driven approach (which can have idealist implications in some forms, whereby a heavily mathematised approach tends towards a platonic notion of form).

In my previous book, The Philosophy of Software (Berry 2011), I use obfuscated code as a helpful example, not as a case of unreadable reading, or even for the spectacular; rather I use it as a stepping-off point to talk about the materiality of code through the notion of software testing. Obfuscated code is code deliberately written to be unreadable to humans but perfectly readable to machines. This can take a number of different forms, from simply mangling the text (from a human point of view), to using distraction techniques, such as confusing or deliberately mislabelling variables, functions, calls, etc. It can even take the form of aesthetic effects, like drawing obvious patterns, streams, and lines in the code, or forming images through the arrangement of the text.
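
To make this concrete, here is a minimal toy sketch of my own (not an example from the book): the same trivial behaviour written once for human readers and once in a deliberately mangled, misleadingly labelled form that only the machine reads comfortably.

```python
# A readable function and an obfuscated equivalent: both are executable by the
# machine, but the second resists human close-reading (toy example).

def average(numbers):
    """Return the arithmetic mean of a list of numbers."""
    return sum(numbers) / len(numbers)

# The same behaviour with mangled names, pointless indirection and a
# misleading label ("encrypt") -- readable to the interpreter, opaque to us.
O0O = lambda l: (lambda s, n: s / n)(sum(l), len(l))
def encrypt(x): return O0O(x)

if __name__ == "__main__":
    data = [3, 5, 7, 9]
    assert average(data) == encrypt(data)  # identical output from both forms
    print(average(data), encrypt(data))
```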

Testing is a hugely important part of the software lifecycle; it links the textual source code to the machinic software and creates the feedback cycle between the two. This I linked to Callon and Latour’s (via Boltanski and Thevenot) use of the notion of ‘tests’ (or trials of strength), implying that it is crucially the running of these obfuscated programs that shows they are legitimate code (they call these legitimate tests), rather than nonsense. The fact that they are unreadable by humans and yet testable is very interesting, the more so as they become aesthetic objects in themselves: programmers start to create ASCII art both as a way of making the code, unreadable as text, now readable as an image, and as a way of adding another semiotic layer to the meaning of the code’s function.

The nature of coping that these tests imply (as trials of strength), combined with the mutability of code, is then constrained through limits placed in terms of the testing and the structure of the project-orientation. This is also how restrictions are delegated into the code, restrictions which serve as what Boltanski and Thevenot call ‘tests’ and which can then be retested through ‘trials of strength’. The borders of the code are also enforced through tests of strength which define the code qua code, in other words as the required/tested coded object. It is important to note that these can also be reflexively “played with” in terms of clever programming that works at the borderline of acceptability for programming practices (hacking is an obvious example of this).

In other words, testing as coping tests can be understood in two different modes: (i) ontic coping tests, which legitimate and approve the functionality and content of the code, in other words that the code is doing what it should, instrumentally, ethically, etc. Here we need to work and think at a number of different levels, of course, from unit testing, application testing, user interface testing, and system testing, in addition to taking account of the context and materialities that serve as conditions of possibility for testing (this could take the form of a number of approaches, including ethnographies, discursive approaches, etc.); and (ii) ontological coping tests, which legitimate the code qua code, that it is code at all, for example authenticating that the code is the code we think it is – we can think of code signing as an example of this, although it has a deeper significance as the quiddity of code. This then points towards a more philosophical approach to how we can understand, recognise or agree on the status of code as code and identify underlying ontological structural features, etc.
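
As a rough illustration of the distinction (the function, the test names and the fingerprinting are my own hypothetical choices, not a prescribed method), an ontic coping test checks that the code does what it should, while an ontological coping test checks that the code is the code we think it is, in the spirit of code signing:

```python
import hashlib
import inspect
import unittest

def word_count(text):
    """The coded object under test: a trivial word counter."""
    return len(text.split())

# "Signing": record a fingerprint of the source at a moment we trust it.
TRUSTED_DIGEST = hashlib.sha256(inspect.getsource(word_count).encode()).hexdigest()

class OnticCopingTest(unittest.TestCase):
    """Legitimates the functionality and content: the code does what it should."""
    def test_counts_words(self):
        self.assertEqual(word_count("to be or not to be"), 6)

class OntologicalCopingTest(unittest.TestCase):
    """Legitimates the code qua code: it is (still) the code we think it is."""
    def test_code_identity(self):
        current = hashlib.sha256(inspect.getsource(word_count).encode()).hexdigest()
        self.assertEqual(current, TRUSTED_DIGEST)

if __name__ == "__main__":
    unittest.main()
```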

For critical theory, I think tests are a useful abstraction as an alternative (or addition) to the close reading of source code. This can be useful from a humanities perspective for teaching some notions of ‘code’ through the idea of ‘iteracy’ for reading code, and will be discussed throughout my new book, Critical Theory and the Digital, in relation to critical readings of software/code opened up through the categories given by critical theory. But this is also extremely important for contemporary critical researchers and students, who require a much firmer grasp of computational principles in order to understand the economy, culture and society which have become softwarized, and more generally for the humanities today, where some knowledge of computation is becoming required to undertake research.

One of the most interesting aspects of this approach, I think, is that it helps sidestep the problems associated with literally reading source code, and the problematic of computational thinking in situ as a programming practice. Coping tests can be developed within a framework of “depth”, in as much as different kinds of tests can be performed by different research communities; in some senses this is analogous to a test suite in programming. For example, one might have UI/UX coping tests, functionality coping tests, API tests, forensic tests (linking to Matthew Kirschenbaum’s notion of forensic media), and even archaeological coping tests (drawing from media archaeology, and particularly theorists such as Jussi Parikka). Here I am thinking both in terms of coping tests written in the present to “test” the “past”, as it were, but also that there must be an interesting history of software testing, which could be reconceptualised through this notion of coping tests, both as test scripts (discursive) and in terms of software programming practice more generally, social ontologies of testing, testing machines, and so forth.[1] We might also think about the possibilities for thinking in terms of social epistemologies of software (drawing on Steve Fuller’s work, for example).

As culture and society are increasingly softwarized, it seems to me that it is very important that critical theory is able to develop concepts in relation to software and code, as the digital. In a later post I hope to lay out a framework for studying software/code through coping tests and a framework/method with case studies (which I am developing with Anders Fagerjord, from IMK, Oslo University).

Notes

[1] Perhaps this is the beginning of a method for what we might call software archaeology. 

New Book: Life in Code and Software: Mediated life in a complex computational ecology

Life in Code and Software (cover image by Michael Najjar)

New book out in 2012 with Open Humanities Press: Life in Code and Software: Mediated life in a complex computational ecology.

This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. Life in Code and Software introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology, made up of disparate software ecologies, that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. That is, we live in a world where computational concepts and ideas are foundational, or ontological, which I call computationality, and within which code and software become the paradigmatic forms of knowing and doing, such that other candidates for this role, such as air, the economy, evolution, the environment, satellites, etc., are understood and explained through computational concepts and categories.

Code, Foucault and Neoliberal Governmentality

For Foucault, neoliberal governmentality is a particular form of post-welfare state politics in which the state essentially outsources the responsibility for ensuring the ‘well-being’ of the population. The primary recipient of this responsibility is the subject, understood through a strengthened notion of the rational individual. Indeed, these new subjectivities are expected to ‘look after themselves’. This form of governmentality has an extremely diffuse form of rule whereby strategies and imperatives of control are distributed through a variety of media but are implicated in even the most mundane practices of everyday life. As Schecter writes,

Foucault regards the exercise of power and the formalisation of knowledge to be intimately bound up with the constitution of living individuals as subjects of knowledge, that is, as citizens and populations about whom knowledge is systematically constructed… Subjects are not born subjects so much as they become them. In the course of becoming subjects they are classified in innumerable ways which contribute to their social integration, even if they are simultaneously marginalised in many cases (Schecter 2010: 171).


So, for example, the state promotes an ethic of self-care which is justified in terms of a wider social responsibility and which is celebrated through examples given in specific moments represented as individual acts of consumption that contribute to a notion of good citizenship. Using recycling bins, caring for one’s teeth, stopping smoking, and so forth are all actively invested by the state as contributing to both individual and collective care, but most importantly they are cast as the responsibility of the citizen to abide by.

Neoliberal governmentality also gestures towards the subordination of state power to the requirements of the marketplace, the implication being that ‘political problems’ are re-presented or cast in market terms. Within this framework citizens are promised new levels of freedom, consumerism, customisation, interactivity and control over their life and possessions. In other words, they are offered an expectation of individual agency that remains largely unfulfilled.

In order to facilitate this governmental platform, certain infrastructural systems need to be put in place: bureaucratic structures, computational agencies and so forth. For example, it has become increasingly clear that providing information to citizens is not sufficient for controlling and influencing behaviour. Indeed, people’s ability to understand and manipulate raw data or information has been found to be profoundly limited in many contexts, with a heavy reliance on habit understood as part of the human condition.

It is here that the notion of compactants (computational actants) allows us to understand the way in which computationality has increasingly become constitutive of the understanding of important categories in late capitalism, like privacy and self-care. Here we could say that we are interested in a transition from the juridicification, through the medicalisation, to the ‘computationalisation’ of reason. Hence, following Foucault, we are interested in studying the formation of discrete powers rather than power in general. That is, Foucault is interested ‘in the processes through which subjects become subjects, the truth becomes truth, and the changing conditions under which this happens, which in the first instance is the discrepancy between the visible and the readable’ (Schecter 2010: 173). Or as Foucault himself writes:

What is at stake in all this research about madness, illness, delinquency, and sexuality, as well as everything else I have been talking about today, is to show how the coupling of a series of practices with a truth regime forms an operative knowledge-power system (dispositif) which effectively inscribes in the real something that does not exist, and which subjects the real to a series of criteria stipulating what is true and what is false, whereby these criteria are taken to be legitimate. It is that moment which does not exist as real and which is not generally considered relevant to the legitimacy of a regime of true and false, it is that moment in things that engages me at the moment. It marks the birth of the asymmetrical bi-polarity of politics and economics, that is, of that politics and economics which are neither things that exist nor are errors, illusions or ideologies. It has to do with something which does not exist and which is nonetheless inscribed within the real, and which has great relevance for a truth regime which makes distinctions between truth and falsity (Foucault, The Birth of Bio-Politics, quoted in Schecter 2010: 173).

Indeed the way in which compactants generate certain notions of truth and falsity is a topic requiring close investigation, both in terms of the surface interface generating a ‘visible’ truth, and in terms of a computational, or cloud, truth that is delivered from the truth-machines that lie somewhere on the networks of power and knowledge.

Foucault suggests that if there is a ‘system’ or ensemble of systems, the task is somehow to think systemic functioning outside of the perspective of the subject dominated by or in charge of the so-called system. Critical thinking can deconstruct the visible harmony between casual seeing and instrumental reason… in contrast with monolithic appearances, surfaces are characterised by strata and folds that can inflect power to create new truths, desires and forms of experience (Schecter 2010: 175).

Here we can make the link between sight and power, and of course sight itself is deployed such that the ‘visible’ is neither transparent nor hidden. Compactants certainly contribute to the deployment of the visible, through the generation of certain forms of geometric and photographic truths manifested in painted screens and surfaces.

Bibliography


Schecter, D. (2010) The Critique of Instrumental Reason from Weber to Habermas, New York: Continuum.

Computational Thinking: Some thoughts about Abduction

One of the striking features of computation is the extent to which forms of pattern matching are required in computer processing. Pattern recognition can be described as a means of identifying repeated shapes or structures which are features of a system under investigation. Whilst we tend to think of patterns as visual, they can of course also be conceptual, iterative, representational, logical, mathematical, etc. in form, provided the underlying computational system can be programmed to recognise the distinctive shape of the pattern from the data. They can also consist of meta-patterns, described by Gregory Bateson as patterns that can be detected across different spheres, such as culture, humanities, science and the social, or ‘the pattern that connects’ (see Bateson 1979; Dixon 2012). The recognition of patterns and the uncovering of their relationships in sets of data was called ‘abductive reasoning’ by Charles Peirce, who contrasted it with inductive and deductive reasoning. Indeed, Peirce described abduction as a kind of logical inference akin to guessing. This he called the leap of abduction, whereby one could abduce A from B if A is sufficient (or nearly sufficient) but not necessary for B. The possible uses of this within a computational context should be fairly obvious, especially when software is handling partial, fuzzy or incomplete data and needs to generate future probabilistic decision points, or recognise important features or contours in a data set.
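
A minimal sketch of how such abductive, probabilistic guessing might look computationally (the hypotheses and likelihoods are invented purely for illustration): the program does not prove an explanation but sets down the most promising one for further testing.

```python
# Toy abductive inference: choose the hypothesis that best explains the
# observed (partial, fuzzy) evidence, as a candidate for further testing.

HYPOTHESES = {
    # hypothesis: likelihood of each observable feature under that hypothesis
    "wet lawn because it rained":     {"lawn_wet": 0.9, "street_wet": 0.9, "sprinkler_on": 0.1},
    "wet lawn because sprinkler ran": {"lawn_wet": 0.9, "street_wet": 0.1, "sprinkler_on": 0.8},
}

def abduce(observations):
    """Return the most promising explanation for the observed features."""
    def score(likelihoods):
        s = 1.0
        for feature, present in observations.items():
            p = likelihoods.get(feature, 0.5)
            s *= p if present else (1.0 - p)
        return s
    return max(HYPOTHESES, key=lambda h: score(HYPOTHESES[h]))

if __name__ == "__main__":
    observed = {"lawn_wet": True, "street_wet": False}   # incomplete data
    print(abduce(observed))  # suggests the sprinkler hypothesis for further test
```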

Charles Sanders Peirce (1839–1914) argued that pattern matching, which he called abduction or retroduction (he also used the terms presumption or hypothesis), was a type of hypothesis formation. The crucial function of ‘a pattern of abduction … consists in its function as a search strategy which leads us, for a given kind of scenario, in a reasonable time to a most promising explanatory conjecture which is then subject to further test’ (Schurz 2008, 205). Peirce argued,

Abduction is the process of forming an explanatory hypothesis. It is the only logical operation which introduces any new idea; for induction does nothing but determine a value, and deduction merely evolves the necessary consequences of a pure hypothesis. Deduction proves that something must be; Induction shows that something actually is operative; Abduction merely suggests that something may be (Peirce 1958: 5.171, original emphasis).

Or perhaps better:

The abductive suggestion comes to us like a flash. It is an act of insight, although extremely fallible insight. It is true that the different elements of the hypothesis were in our minds before; but it is the idea of putting together what we had never before dreamed of putting together which flashes the new suggestion before our contemplation (Peirce 1988: 227, original emphasis).

In effect, abduction is the process of arriving at an explanatory hypothesis or a process of generating a hypothesis. As Eldridge explains,

For Peirce, abduction works from these surprising facts to determine a possible, plausible explanation. Furthermore, Peirce stresses the fact that the logic of abduction is fallible – abductive inferences, like induction, can, and do, lead us to the wrong result (Peirce 1958 5.189, 5.197, 6.532). However, as a part of the triad, abduction is able to correct itself, once it is investigated by deduction and tested by induction (Peirce 1958 5.574). Because of this, we should never take the conclusion of an abductive inference to be a fact in and of itself until it is tested. Until that point “abduction commits us to nothing…it merely causes a hypothesis to be set down upon our docket of cases to be tried” (Peirce 1958 5.602). Furthermore, by hypothesis, Peirce does not just mean scientific hypotheses. Abduction certainly includes the more formalized, conscious cognitive process of deliberately searching for an explanation to a set of particular facts; however, abduction is also a logical inference used in everyday life from crude hypotheses (his Catholic priest example) to perceptual judgments (understanding the information that we receive from our senses) (Peirce 1958 7.202, 5.180, 5.184) (Eldridge n.d.).

Patterns were made popular as a heuristic for thinking about the new problematics introduced by software systems through the work of the architect Christopher Alexander (1936-), particularly Notes on the Synthesis of Form (Alexander 1964), The Timeless Way of Building (Alexander 1979), and A Pattern Language (Alexander et al. 1977), which influenced computer scientists, who found useful parallels between building design and the practice of software design (Rybczynski 2009). Alexander’s central premise in his books, ‘driving over thirty years of thoughts, actions, and writings, is that there is something fundamentally wrong with twentieth century architectural design methods and practices’ (Lea 1997). Indeed, A Pattern Language was originally written to enable any citizen to design and construct their own home, although it is arguable that he has had more influence on computer scientists than architects. As Appleton explains, patterns ‘are a literary form of software engineering problem-solving [approach] that has its roots in a design movement of the same name in contemporary architecture… [they enable a] common vocabulary for expressing its concepts, and a language for relating them together. The goal of patterns within the software community is to create a body of literature to help software developers resolve recurring problems encountered throughout all of software development’ (Appleton 2000).

The Timeless Way of Building and A Pattern Language were written as a pair, with the former presenting rationale and method, and the latter concrete details. They present a fresh alternative to the use of standardized models and components, and accentuate the philosophical, technical and social-impact differences between analytic methods and the adaptive, open, and reflective (all in several senses) approach to design. The term pattern is a preformal construct (Alexander does not ever provide a formal definition) describing sets of forces in the world and relations among them. In Timeless, Alexander describes common, sometimes even universal patterns of space, of events, of human existence, ranging across all levels of granularity. A Pattern Language contains 253 pattern entries. Each entry might be seen as an in-the-small handbook on a common, concrete architectural domain. Each entry links a set of forces, a configuration or family of artifacts, and a process for constructing a particular realization. Entries intertwine these ‘problem space’, ‘solution space’, and ‘construction space’ issues in a simple, down-to-earth fashion, so that each may evolve concurrently when patterns are used in development (Lea 1997).

Patterns are therefore reusable, structured, or formalised ways of doing things or processing information and data. Alexander himself defined each pattern as:

a three-part rule, which expresses a relation between a certain context, a problem, and a solution. As an element in the world, each pattern is a relationship between a certain context, a certain system of forces which occurs repeatedly in that context, and a certain spatial configuration which allows these forces to resolve themselves. As an element of language, a pattern is an instruction, which shows how this spatial configuration can be used, over and over again, to resolve the given system of forces, wherever the context makes it relevant. The pattern is, in short, at the same time a thing, which happens in the world, and the rule which tells us how to create that thing, and when we must create it. It is both a process and a thing; both a description of a thing which is alive, and a description of the process which will generate that thing (Alexander 1979: 247).

The antithesis of a pattern is called an anti-pattern: that is, a pattern that describes either (i) a bad solution to a problem which resulted in a bad situation, or (ii) how to get out of a bad situation and how to proceed from there to a good solution (Appleton 2000; Brown et al. 1998). Patterns and pattern languages provide a broader framework in which to think about questions of the paradigmatic means of designing and implementing computational systems. Indeed, in many cases patterns are used in this way to indicate a set of means for the development of software at a macro level. It should also be noted that patterns can be combined with other patterns to produce new patterns at a higher level of complexity; indeed this is the idea behind Alexander’s (1977) notion of a ‘pattern language’. Within software design it is quite common to see three levels noted, from most abstract to most concrete: Architectural Patterns, Design Patterns and Implementation Patterns, the last being detailed, programming-language-specific patterns or idioms (Microsoft 2012).
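
As a small illustration of what a reusable design pattern looks like once it reaches code, here is a minimal sketch of the classic Observer pattern (my own choice of example, not one discussed in the sources above), expressing a recurring relation between a subject and its dependants.

```python
# A minimal Observer pattern: one of the classic reusable design patterns,
# expressing a recurring relation between a subject and its dependants.

class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer.update(event)

class Logger:
    def update(self, event):
        print(f"logged: {event}")

if __name__ == "__main__":
    subject = Subject()
    subject.attach(Logger())
    subject.notify("state changed")  # the same pattern recurs wherever this relation holds
```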

Within computer science, the more micro-level problem of automatically recognising patterns within data sets using computation is an important and challenging area of research. The main forms of pattern recognition (we can think of these as patterns to find patterns) used in computation are usually enumerated as template matching, prototype matching, feature analysis, recognition by components, Fourier analysis, and lastly bottom-up and top-down processing. I’ll briefly describe each of the six main approaches.

Template Matching: This is where a computational device uses a set of images (or templates) against which it can compare a data set, which might be an image for example (for examples of an image set, see Cole et al. 2004).

Template Matching (Jahangir 2008)
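
A minimal sketch of template matching over a small greyscale array, using a simple sum-of-squared-differences score (illustrative only, not the method of the sources cited above):

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and return the best-matching position
    (smallest sum of squared differences)."""
    ih, iw = image.shape
    th, tw = template.shape
    best_pos, best_score = None, np.inf
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            score = np.sum((patch - template) ** 2)
            if score < best_score:
                best_pos, best_score = (y, x), score
    return best_pos, best_score

if __name__ == "__main__":
    image = np.zeros((6, 6))
    image[2:4, 3:5] = 1.0                   # a bright 2x2 block
    template = np.ones((2, 2))
    print(match_template(image, template))  # -> ((2, 3), 0.0)
```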

Prototype Matching: This form of pattern matching uses a set of prototypes, which are understood as an average characteristic of a particular object or form. The key is that there does not need to be a perfect match, merely a high probability of likelihood that the object and prototype are similar (for an example, see Antonina et al. 2003).
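
A toy sketch of prototype matching: each class is summarised by the average of its known examples, and a new observation is assigned to the nearest prototype, with no perfect match required (data invented for illustration).

```python
import numpy as np

# Each class is summarised by the average of its known examples (the prototype).
examples = {
    "tall-thin":   np.array([[180, 60], [185, 65], [178, 62]], dtype=float),
    "short-broad": np.array([[160, 85], [158, 90], [165, 88]], dtype=float),
}
prototypes = {label: data.mean(axis=0) for label, data in examples.items()}

def classify(observation):
    """Assign the observation to the class whose prototype is closest."""
    return min(prototypes, key=lambda label: np.linalg.norm(observation - prototypes[label]))

if __name__ == "__main__":
    print(classify(np.array([182.0, 63.0])))   # likely "tall-thin"
```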

Feature Analysis: In this approach a variety of techniques are combined, including detection, pattern dissection, feature comparison, and recognition. Essentially the source data is broken into key features or patterns to be compared with a library of partial objects to be matched against (for examples, see Morgan n.d.).
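
A toy sketch of feature analysis, in which the source data is decomposed into a few crude features and the feature profile compared against a small library of known profiles (names and values invented for illustration):

```python
# Decompose a shape drawn on a tiny grid into crude features, then compare
# the feature profile against a library of known profiles.

def extract_features(grid):
    """Count filled cells, fully filled rows and fully filled columns."""
    filled = sum(cell for row in grid for cell in row)
    full_rows = sum(1 for row in grid if all(row))
    full_cols = sum(1 for col in zip(*grid) if all(col))
    return (filled, full_rows, full_cols)

FEATURE_LIBRARY = {
    "vertical bar":   (3, 0, 1),   # e.g. a single filled column on a 3x3 grid
    "horizontal bar": (3, 1, 0),
    "solid block":    (9, 3, 3),
}

def recognise(grid):
    features = extract_features(grid)
    # Pick the library entry whose profile differs least from the observed one.
    return min(FEATURE_LIBRARY,
               key=lambda k: sum(abs(a - b) for a, b in zip(FEATURE_LIBRARY[k], features)))

if __name__ == "__main__":
    bar = [[0, 1, 0],
           [0, 1, 0],
           [0, 1, 0]]
    print(recognise(bar))   # -> "vertical bar"
```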

Recognition by Components: In this approach objects are understood to be made up of what are called ‘geons’, or geometric primitives. A sample of data or images is then processed through feature detectors which are programmed to look for curves, edges, etc., or through a geon detector which looks for simple 2D or 3D forms such as cylinders, bricks, wedges, cones, circles, and rectangles (see Biederman 1987).
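
A toy sketch of recognition by components: objects are defined as sets of geometric primitives (‘geons’), and a detected set of parts is matched to the object whose components it best overlaps (the object definitions are invented for illustration):

```python
# Objects defined by their component geons; recognition is the best overlap
# between detected parts and each object's component set.

OBJECTS = {
    "mug":      {"cylinder", "curved handle"},
    "suitcase": {"brick", "curved handle"},
    "torch":    {"cylinder", "cone"},
}

def recognise(detected_geons):
    """Return the object whose geon set best overlaps the detected parts."""
    def overlap(components):
        return len(components & detected_geons) - len(components - detected_geons)
    return max(OBJECTS, key=lambda name: overlap(OBJECTS[name]))

if __name__ == "__main__":
    print(recognise({"cylinder", "curved handle"}))   # -> "mug"
```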

Fourier Analysis: This form of pattern matching uses algorithms to decompose something into smaller pieces which can then be selectively analysed. The decomposition process itself is called the Fourier transform. For example, an image might be broken down into a set of twenty squares across the image field, each of which, being smaller, is faster to process. As Moler (2004) argues, ‘we all use Fourier analysis every day without even knowing it. Cell phones, disc drives, DVDs, and JPEGs all involve fast finite Fourier transforms’. Fourier transformation is also used to generate a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (the discrete cosine transform) of small square pieces of the digital image. The Fourier components of each square are then rounded to lower arithmetic precision, and weak components are discarded, so that the remaining components can be stored in much less computer memory or storage space. To reconstruct the image, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image; this is why the image can exhibit ‘blocky’ and other distinctive digital artefacts in the rendered image (see JPEG 2012).
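
A small sketch of this use of Fourier analysis for compression, using the FFT rather than the discrete cosine transform that JPEG actually uses (purely illustrative): the signal is transformed, weak components are discarded, and an approximation is reconstructed by the inverse transform.

```python
import numpy as np

def compress(signal, keep_fraction=0.25):
    """Fourier-transform a signal, keep only the strongest components,
    and reconstruct an approximation with the inverse transform."""
    spectrum = np.fft.fft(signal)
    threshold = np.quantile(np.abs(spectrum), 1 - keep_fraction)
    spectrum[np.abs(spectrum) < threshold] = 0          # discard weak components
    return np.real(np.fft.ifft(spectrum))

if __name__ == "__main__":
    t = np.linspace(0, 1, 256, endpoint=False)
    signal = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.randn(256)   # tone plus noise
    approx = compress(signal)
    print("mean absolute error:", np.mean(np.abs(signal - approx)))
```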

Bottom-up and Top-down Processing: Finally, in the bottom-up method an interpretation emerges from the data; this is called data-driven or bottom-up processing. Here the interpretation of a data set is determined mostly by the information collected, not by prior models or structures being fitted to the data; hence this approach looks for repeated patterns that emerge from the data. The idea is that, starting with no knowledge, the software is able to learn to draw generalisations from particular examples. Alternatively, prior knowledge or structures can be applied, and data is fitted into these models to see if there is a ‘fit’. This approach is sometimes called schema-driven or top-down processing. A schema is a pattern formed earlier in a data set or drawn from previous information (Dewey 2011).
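
A toy contrast of the two modes (thresholds and data invented for illustration): bottom-up processing lets groupings emerge from the data itself, while top-down processing tests whether the data fits a prior schema.

```python
# Bottom-up: group values purely from the data (no prior model).
def bottom_up_groups(values, gap=5):
    """Split sorted values into clusters wherever consecutive values differ by more than `gap`."""
    values = sorted(values)
    groups = [[values[0]]]
    for v in values[1:]:
        if v - groups[-1][-1] > gap:
            groups.append([v])
        else:
            groups[-1].append(v)
    return groups

# Top-down: test whether the data fits a pre-given schema.
def fits_schema(values, schema=(0, 100)):
    """Check the data against a prior expectation (here: values within a range)."""
    low, high = schema
    return all(low <= v <= high for v in values)

if __name__ == "__main__":
    data = [1, 2, 3, 40, 41, 42]
    print(bottom_up_groups(data))        # -> [[1, 2, 3], [40, 41, 42]]
    print(fits_schema(data, (0, 50)))    # -> True
```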

What should be apparent from this brief discussion of the principles of abduction and pattern-matching in computer science is their creative possibility for generating results from data sets. The ability to generate hypotheses on the basis of data, in a way that is fallible and probabilistic, allows computational devices to generate forecasts and predictions based on current and past behaviours, data collection, models, and images. It is this principle of abductive reason which makes computational reasoning different from instrumental reason, and particularly from the iron cage of logical implication or programmatic outcome that instrumental reason suggests. Indeed Alexander argues that the most useful patterns are generative,

These patterns in our minds are, more or less, mental images of the patterns in the world: they are abstract representations of the very morphological rules which define the patterns in the world. However, in one respect they are very different. The patterns in the world merely exist. But the same patterns in our minds are dynamic. They have force. They are generative. They tell us what to do; they tell us how we shall, or may, generate them; and they tell us too, that under certain circumstances, we must create them. Each pattern is a rule which describes what you have to do to generate the entity which it defines. (Alexander 1979: 181-182)

Bibliography

Alexander, C. (1964) Notes on the Synthesis of Form, Harvard University Press.

Alexander, C., Ishikawa, S. and Silverstein, M. (1977) A Pattern Language, Oxford: Oxford University Press.

Alexander, C. (1979) The Timeless Way of Building, Oxford: Oxford University Press.

Antonina, K., Barbro, B., Hannu, V., Jarmo, T. and Ari, V. (2003) Prototype-Matching System for Allocating Conference Papers, accessed 31/03/2012, http://www.hicss.hawaii.edu/HICSS36/HICSSpapers/DTDMI06.pdf

Appleton, B. (2000) Patterns and Software: Essential Concepts and Terminology, accessed 31/03/2012, http://www.cmcrossroads.com/bradapp/docs/patterns-intro.html

Bateson, G. (1979) Mind and Nature: A Necessary Unity, (Advances in Systems Theory, Complexity, and the Human Sciences). Hampton Press, accessed 31/03/2012, http://www.oikos.org/mind&nature.htm

Biederman, I. (1987) Recognition-by-Components: A Theory of Human Image Understanding, Psychological Review, 1987, Vol. 94, No. 2,115-147, accessed 31/03/2012, http://www.cim.mcgill.ca/~siddiqi/COMP-558-2012/rbc.pdf

Brown, W., Malveau, R., McCormick, H. and Mowbray, T. (1998) AntiPatterns, accessed 31/03/2012, http://www.antipatterns.com/

Cole, L, Austin, D., Cole, L. (2004) Visual Object Recognition using Template Matching, accessed 31/03/2012, http://www.araa.asn.au/acra/acra2004/papers/cole.pdf

Dewey, R. A. (2011) Top-down and Bottom-up Processing, http://www.intropsych.com/ch07_cognition/top-down_and_bottom-up_processing.html

Dixon, D. (2012) Analysis Tool or Design Methodology? Is there an epistemological basis for patterns?, in Berry, D. M. (ed.) Understanding Digital Humanities, London: Palgrave.

Eldridge, M. (n.d.) Clarifying the Process of Abduction and Understanding “Inductive” Generalization, accessed 31/03/2012, http://www.philosophy.uncc.edu/mleldrid/SAAP/USC/TP26.html

Jahangir, N. (2008) Genetic Algorithm Driven Template Matching In ActionScript 3.0, accessed 31/03/2012, http://nadimissimple.wordpress.com/2008/12/11/genetic-algorithm-driven-template-matching/

JPEG (2012) JPEG Homepage, accessed 31/03/2012, http://www.jpeg.org/jpeg/index.html

Lea, D. (1997) Christopher Alexander: An Introduction for Object-Oriented Designers, accessed 31/03/2012, http://g.oswego.edu/dl/ca/ca/ca.html

Microsoft (2012) Organizing Patterns, accessed 01/04/2012, http://msdn.microsoft.com/en-us/library/ff647589.aspx

Moler, C. (2004) Numerical Computing with MATLAB, accessed 31/03/2012, http://www.mathworks.se/moler/chapters.html

Morgan, M. (n.d.) Feature Analysis, accessed 31/03/2012, http://www.staff.city.ac.uk/~morgan/FeatureAnalysis.pdf

Peirce, C. S. (1958) The Collected Works of Charles Sanders Peirce, Harvard University Press.

Peirce, C. S. (1988) Pragmatism as the Logic of Abduction, in The Essential Peirce: Selected Philosophical Writings, 1893—1913, Bloomington: Indiana University Press.

Rybczynski, W. (2009) Do You See a Pattern?, Slate, accessed 31/03/2012, http://www.slate.com/articles/arts/architecture/2009/12/do_you_see_a_pattern.html

Schurz, G. (2008) Patterns of Abduction, Synthese, 164 (2): 201-234.
