
On Capture

In thinking about the conditions of possibility for the mediated landscape of the post-digital (Berry 2014), it is useful to explore concepts of capture and captivation, particularly as articulated by Rey Chow (2012). Chow argues that being "captivated" is

the sense of being lured and held by an unusual person, event, or spectacle. To be captivated is to be captured by means other than the purely physical, with an effect that is, nonetheless, lived and felt as embodied captivity. The French word captation, referring to a process of deception and inveiglement [or persuade (someone) to do something by means of deception or flattery] by artful means, is suggestive insofar as it pinpoints the elusive yet vital connection between art and the state of being captivated. But the English word “captivation” seems more felicitous, not least because it is semantically suspended between an aggressive move and an affective state, and carries within it the force of the trap in both active and reactive senses, without their being organised necessarily in a hierarchical fashion and collapsed into a single discursive plane (Chow 2012: 48). 

To think about capture, then, is to think about the mediatized image in relation to reflexivity. For Chow, Walter Benjamin inaugurated a major change in the conventional logic of capture: from a notion of reality being caught or contained in the copy-image, as in a repository, the copy-image becomes mobile, and this mobility adds to its versatility. The copy-image then supersedes or replaces the original as the main focus; as such, this logic of the mechanical reproduction of images undermines hierarchy and introduces a notion of the image as infinitely replicable and extendable. Thus the "machinic act or event of capture" creates the possibility for further dividing and partitioning, that is, for the generation of copies and images, and sets in motion the conditions of possibility of a reality that is structured around the copy.

Chow contrasts capture with the modern notion of "visibility", as when Foucault argues that "full lighting and the eyes of a supervisor capture better than darkness, which ultimately protected. Visibility is a trap" (Foucault 1991: 200). Thus in what might be thought of as the post-digital – a term that Chow doesn't use but which I think is helpful in thinking about this contrast – what is at stake is no longer the link between visibility and surveillance, nor the link between becoming-mobile and the technology of images, but rather the collapse of the "time lag" between the world and its capture.

This is when time loses its potential to "become fugitive" or "fossilised" and hence to be anachronistic. The key point is that the very possibility of memory is disrupted when images become instantaneous and therefore synonymous with an actual happening. This is the condition of the post-digital, in which digital technologies make possible not only the instant capture and replication of an event, but also the very definition of the experience through its mediation, both at the moment of capture – such as the waving smartphones at a music concert or event – and in the subsequent recollection and reflection on that experience.

Thus the moment of capture or "arrest" is an event of enclosure, locating and making possible the sharing and distribution of a moment through infinite reproduction and dissemination. Capture therefore represents a techno-social moment, but it is also discursive, in that it is a type of discourse derived from the imposition of power on bodies and the attachment of bodies to power. This Chow calls a heteronomy or heteropoiesis: a system or artefact designed by humans for some purpose, unable to reproduce itself, yet able to exert agency, often in the form of prescription back onto its designers – essentially producing an externality in relation to the application of certain "laws" or regulations.

Nonetheless, capture and captivation also constitute a critical response, through the possibility of a disconnecting logic and the dynamics of mimesis. This possibility, reflected in the notion of entanglements, refers to the "derangements in the organisation of knowledge caused by unprecedented adjacency and comparability or parity". This is, of course, definitional in relation to the notion of computation, which itself works through a logic of formatting, configuration, structuring and the application of computational ontologies (Berry 2011, 2014).

Here capture offers the possibility of a form of practice in relation to alienation by making the inquirer adopt a position of criticism, the art of making strange. Chow here is making links to Brecht and Shklovsky, and in particular their respective predilections for estrangement in artistic practice, such as Brecht's notion of Verfremdung, and thus to show how things work whilst they are being shown (Chow 2012: 26-28). In this moment of alienation the possibility is raised of things being otherwise. This is the art of making strange as a means to disrupt everyday conventionalism and refresh the perception of the world – art as device. The connections between techniques of capture and critical practice as advocated by Chow, and reading or writing the digital, are suggestive in relation to computation more generally, not only in artistic practice but also in terms of critical theory. Indeed, capture could be a useful hinge around which to subject the softwarization practices, infrastructures and experiences of computation to critical thought, both in terms of their technical and social operations and in terms of the extent to which they generate a coercive imperative for humans to live and stay alive under the conditions of a biocomputational regime.

Bibliography

Berry, D. M. (2011) The Philosophy of Software, London: Palgrave.

Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury.

Chow, R. (2012) Entanglements, or Transmedial Thinking about Capture, London: Duke University Press.

Foucault, M. (1991) Discipline and Punish, London: Penguin Social Sciences.


Signposts for the Future of Computal Media

I would like to begin to outline what I think are some of the important trajectories to keep an eye on in regard to what I increasingly think of as computal media. That is, the broad area dependent on computational processing technologies, or areas soon to be colonised by such technologies.

In order to do this I want to examine a number of key moments that I will use to structure thinking about the softwarization of media. By "softwarization" I mean, broadly, the notion of Andreessen (2011) that "software is eating the world" (see also Berry 2011; Manovich 2013). Softwarization is then a process of the application of computation (see Schlueter Langdon 2003), in this case to all forms of historical media, but also to the generation of born-digital media.
However, this process of softwarization is tentative, multi-directional, contested, and moving on multiple strata at different modularities and speeds. We therefore need to develop critiques of the concepts that drive these processes of softwarization, but also to think about what kinds of experience make the epistemological categories of the computal possible. For example, one feature that distinguishes the computal is its division into surfaces, rough or pleasant, and concealed, inaccessible structures.
It seems to me that this task is rightly a critical undertaking. That is, an historical materialism that understands that the key organising principles of our experience are produced by ideas developed within the array of social forces that human beings have themselves created. This includes understanding the computal subject as an agent dynamically contributing and responding to the world.
So I want to now look at a number of moments to draw out some of what I think are the key developments to be attentive to in computal media. That is, not the future of new media as such, but rather “possibilities” within computal media, sometimes latent but also apparent. 
The Industrial Internet
A new paradigm called the "industrial internet" is emerging: a computational, real-time streaming ecology reconfigured in terms of digital flows, fluidities and movement. The paradigmatic metaphor I want to use for this new industrial internet is real-time streaming technologies: the data flows, the processual stream-based engines, and the computal interfaces and computal "glue" holding them together. This is the internet of things and the softwarization of everyday life, and it represents the beginning of a post-digital experience of computation as such.
This calls for us to stop thinking about the digital as something static, discrete and object-like and instead consider ‘trajectories’ and computational logistics. In hindsight, for example, it is possible to see that new media such as CDs and DVDs were only ever the first step on the road to a truly computational media world. Capturing bits and disconnecting them from wider networks, placing them on plastic discs and stacking them in shops for us to go visit and buy seems bizarrely pedestrian today. 
The taking account of such media and related cultural practices becomes increasingly algorithmic, and as such media itself becomes mediated via software. At the same time, previous media forms are increasingly digitalised and placed in databases, viewed not on their original equipment but accessed through software devices, browsers and apps. As all media becomes algorithmic, it is subject to monitoring and control at a level to which we are not accustomed – e.g. Amazon's mass deletion of Orwell's 1984 from personal Kindles in 2009 (Stone 2009).

The rolling out of the sensor-based world of the internet of things is underway, with companies such as Broadcom developing Wireless Internet Connectivity for Embedded Devices: "WICED Direct will allow OEMs to develop wearable sensors — pedometers, heart-rate monitors, keycards — and clothing that transmit everyday data to the cloud via a connected smartphone or tablet" (Seppala 2013). Additionally, Apple is developing new technology in this area with its iBeacon software layer, which uses Bluetooth Low Energy (BLE) to create location-aware micro-devices, and "can enable a mobile user to navigate and interact with specific regions geofenced by low cost signal emitters that can be placed anywhere, including indoors, and even on moving targets" (Dilger 2013). In fact, the "dual nature of the iBeacons is really interesting as well. We can receive content from the beacons, but we can be them as well" (Kosner 2013). This relies on Bluetooth version 4.0, also called "Bluetooth Smart", which supports devices that can be powered for many months by a small button battery, and in some cases for years. Indeed,

BLE is especially useful in places (like inside a shopping mall) where GPS location data may not be reliably available. The sensitivity is also greater than either GPS or WiFi triangulation. BLE allows for interactions as far away as 160 feet, but doesn't require surface contact (Kosner 2013).

These new computational sensors enable Local Positioning Systems (LPS) or micro-location, in contrast to the less precise technology of Global Positioning Systems (GPS). These “location based applications can enable personal navigation and the tracking or positioning of assets” to the centimetre, rather than the metre, and hence have great potential as tracking systems inside buildings and facilities (Feldman 2009).
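To make the micro-location idea concrete, the following is a minimal sketch (not code from any of the systems mentioned above) of how a BLE-based application typically estimates its distance from a beacon, using the standard log-distance path-loss model; the transmit power, path-loss exponent and RSSI values are illustrative assumptions.

```python
# Minimal sketch: estimating distance to a BLE beacon from received signal
# strength (RSSI) using the log-distance path-loss model.
# All values below are illustrative assumptions, not vendor specifications.

def estimate_distance(rssi: float, tx_power: float = -59.0,
                      path_loss_exponent: float = 2.0) -> float:
    """Estimate distance in metres from an RSSI reading in dBm.

    tx_power is the calibrated RSSI measured at 1 metre from the beacon;
    path_loss_exponent models the environment (~2 in free space, higher indoors).
    """
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

if __name__ == "__main__":
    for rssi in (-55, -65, -75, -85):
        print(f"RSSI {rssi} dBm -> roughly {estimate_distance(rssi):.1f} m from the beacon")
```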

Bring Your Own Device (BYOD)
This shift also includes the move from relatively static desktop computers to mobile computers and tablet-based devices – the consumerisation of technology. Indeed, according to the International Telecommunications Union (ITU 2012: 1), in 2012 there were 6 billion mobile devices (up from 2.7 billion in 2006), with YouTube alone streaming 200 terabytes of video per day. Indeed, by the end of 2011, 2.3 billion people (i.e. one in three) were using the Internet (ITU 2012: 3).
By 2011, users were creating 1.8 zettabytes of data annually, and this is expected to grow to 7.9 zettabytes by 2015 (Kalakota 2011). To put this in perspective, a zettabyte is equal to 1 billion terabytes – clearly at these scales the storage sizes become increasingly difficult for humans to comprehend. A zettabyte is roughly equal in size to twenty-five billion Blu-ray discs or 250 billion DVDs.
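A quick back-of-the-envelope check of these comparisons, assuming decimal units, 4.7 GB per DVD and 25 GB per single-layer Blu-ray disc (my assumptions, not Kalakota's), shows where such round figures come from:

```python
# Back-of-the-envelope check of the zettabyte comparisons above.
# Assumes decimal units, 4.7 GB per DVD and 25 GB per single-layer Blu-ray.

ZETTABYTE = 10 ** 21        # bytes
TERABYTE = 10 ** 12         # bytes
DVD = 4.7 * 10 ** 9         # bytes
BLU_RAY = 25 * 10 ** 9      # bytes

print(ZETTABYTE / TERABYTE)   # 1e9  -> a billion terabytes
print(ZETTABYTE / DVD)        # ~2.1e11 -> on the order of hundreds of billions of DVDs
print(ZETTABYTE / BLU_RAY)    # 4e10 -> tens of billions of Blu-ray discs
```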

The acceptance by users and providers of the consumerisation of technology has also opened up the space for the development of "wearables", and these highly intimate devices are currently under development, with the most prominent example being Google Glass. Often low-power devices, making use of BLE and iBeacon-type technologies, they augment our existing devices, such as the mobile phone, rather than outright replacing them, and offer new functionalities, such as fitness monitors, notification interfaces, contextual systems and so forth.

The Personal Cloud (PC)
These pressures are creating an explosion in data and a corresponding expansion in various forms of digital media (currently uploaded to corporate clouds). As a counter-move to the existence of massive centralised corporate systems there is a call for Personal Clouds (PCs), a decentralisation of data from the big cloud providers (Facebook, Google, etc.) into smaller personal spaces (see Personal Cloud 2013). Conceptually this is interesting in relation to BYOD.
This of course changes our relationship to knowledge, and the forms of knowledge which we keep and are able to use. Archives are increasingly viewed through the lens of computation, both in terms of cataloguing and storage and in terms of remediation and configuration. Practices around these knowledges are also shifting, and as social media demonstrates, new forms of sharing and interaction are made possible. The Personal Cloud also has links to decentralised authentication technologies (e.g. DAuth vs OAuth).
Digital Media, Social Reading, Sprints
It has taken digital a lot longer than many had thought to provide a serious challenge to print, but it seems to me that we are now in a new moment in which digital texts enable screen-reading, if it is not an anachronism to still call it that, as a sustained reading practice. There are lots of experiments in this space, e.g. my notion of the "minigraph" (Berry 2013) or the mini-monograph, technical reports, the "multigraph" (McCormick 2013), pamphlets, and so forth. There are also new means for writing (e.g. Quip) and for social reading and collaborative writing (e.g. Book Sprints).
DIY Encryption and Cypherpunks
Together, these technologies create the contours of a new communicational landscape appearing before us, into which computational media mediates use and interaction. Phones become smart phones and media devices that can identify, monitor and control our actions and behaviour through anticipatory computing. Whilst seemingly freeing us, we are also increasingly enclosed within an algorithmic cage that attempts to surround us with contextual advertising and behavioural nudges.
One response could be “Critical Encryption Practices”, the dual moment of a form of computal literacy and understanding of encryption technologies and cryptography combined with critical reflexive approaches. Cypherpunk approaches tend towards an individualistic libertarianism, but there remains a critical reflexive space opened up by their practices. Commentators are often dismissive of encryption as a “mere” technical solution to what is also a political problem of widespread surveillance. 
CV Dazzle Make-up, Adam Harvey
However, critical encryption practices could provide the political, technical and educative moments required for the kinds of media literacies important today – e.g. in civil society.
This includes critical treatment of and reflection on crypto-systems such as cryptocurrencies like Bitcoin, and the kinds of cybernetic imaginaries that often accompany them. Critical encryption practices could also develop signalling systems – e.g. the new aesthetic and Adam Harvey's work.
Augmediated Reality
The idea of supplementing or augmenting reality is being transformed by the notion of "augmediated" technologies (Mann 2001). These are technologies that offer a radical mediation of everyday life via screenic forms (such as "Glass") to co-construct a computally generated synoptic meta-reality formed of video feeds, augmented technology and real-time streams and notifications. Intel's work on Perceptual Computing is a useful example of this kind of media form.
The New Aesthetic
These factors raise issues of new aesthetic forms related to the computal. For example, augmediated aesthetics suggests new forms of experience in relation to its aesthetic mediation (Berry et al 2012). The continuing "glitch" digital aesthetic remains interesting in relation to the new aesthetic and aesthetic practice more generally (see Briz 2013). Indeed, the aesthetics of encryption, e.g. "complex monochromatic encryption patterns", the mediation of encryption and so on, offers new ways of thinking about the aesthetic in relation to digital media more generally and the post-digital (see Berry et al 2013).
Bumblehive and Veillance
Within a security setting one of the key aspects is data collection and it comes as no surprise that the US has been at the forefront of rolling out gigantic data archive systems, with the NSA (National Security Agency) building the country’s biggest spy centre at its Utah Data Center (Bamford 2012) – codenamed Bumblehive. This centre has a “capacity that will soon have to be measured in yottabytes, which is 1 trillion terabytes or a quadrillion gigabytes” (Poitras et al 2013). 
This is connected to the notion of the comprehensive collection of data because, "if you're looking for a needle in the haystack, you need a haystack," according to Jeremy Bash, the former CIA chief of staff. The scale of the data collection is staggering: according to Davies (2013), UK GCHQ has placed "more than 200 probes on transatlantic cables and is processing 600m 'telephone events' a day as well as up to 39m gigabytes of internet traffic". Veillance – both surveillance and sousveillance – is made easier with mobile devices and cloud computing. We face rising challenges in responding to these issues.
The Internet vs The Stacks
The internet as we tend to think of it has become increasingly colonised by massive corporate technology stacks. These companies, Google, Apple, Facebook, Amazon, Microsoft, are called collectively “The Stacks” (Sterling, quoted in Emami 2012) – vertically integrated giant social media corporations. As Sterling observes,

[There’s] a new phenomena that I like to call the Stacks [vertically integrated social media]. And we’ve got five of them — Google, Facebook, Amazon, Apple and Microsoft. The future of the stacks is basically to take over the internet and render it irrelevant. They’re not hostile to the internet — they’re just [looking after] their own situation. And they all think they’ll be the one Stack… and render the others irrelevant… They’re annihilating other media… The Lords of the Stacks (Sterling, quoted in Emami 2012).

The Stacks also raise the issue of resistance and what we might call counter-stacks: hacking the stacks, and movements like the Indieweb and Personal Cloud computing, are interesting responses to them, and Sterling optimistically thinks "they'll all be rendered irrelevant. That's the future of the Stacks" (Sterling, quoted in Emami 2012).
The Indieweb
The Indieweb is a kind of DIY response to the Stacks and an attempt to wrestle some control back from these corporate giants (Finley 2013). These Indieweb developers offer an interesting perspective on what is at stake in the current digital landscape; somewhat idealistic and technically oriented, they nonetheless offer a site of critique. They are also notable for "building things", often small-scale, micro-format type things, decentralised and open source/free software in orientation. The Indieweb is, then, "an effort to create a web that's not so dependent on tech giants like Facebook, Twitter, and, yes, Google — a web that belongs not to one individual or one company, but to everyone" (Finley 2013).
Push Notification
This surface, or interactional layer, of the digital is hugely important for providing the foundations through which we interact with digital media (Berry 2011). Under development are new high-speed adaptive algorithmic interfaces (algorithmic GUIs) that can offer contextual information, and even reshape the entire interface itself, through the monitoring of our reactions to computational interfaces and feedback and sensor information from the computational device itself – e.g. Google Now. 
The Notification Layer
One of the key sites for the reconciliation of the complexity of real-time streaming computing is the notification layer, which will increasingly be an application programming interface (API) and function much like a platform. This is very much the battle taking place between the "Stacks", e.g. Google Now, Siri, Facebook Home, Microsoft "tiles", etc. With the political economy of advertising being transformed by the move from web to mobile, notification layers threaten revenue streams.
It is also a battle over subjectivity and the kind of subject constructed in these notification systems.
Real-time Data vs Big Data
We have been hearing a lot about "big data" and related data visualisation, methods, and so forth. Big data (exemplified by the NSA Prism programme) is largely a historical batch computing system. A much more difficult challenge is real-time stream processing, e.g. future NSA programmes called SHELLTRUMPET, MOONLIGHTPATH and SPINNERET, and the GCHQ Tempora programme.
That is, monitoring in real-time, and being able to computationally spot patterns, undertake stream processing, etc.
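As a purely illustrative sketch of the batch/stream distinction (the window size, threshold and data below are invented for illustration and bear no relation to the programmes named above), the following contrasts counting matches over a complete dataset with flagging a pattern as events arrive:

```python
# Illustrative contrast between batch analysis and stream processing.
from collections import deque
from typing import Iterable, Iterator

def batch_count(events: Iterable[str], keyword: str) -> int:
    """Batch style: the whole dataset is available before analysis begins."""
    return sum(1 for event in events if keyword in event)

def stream_alerts(events: Iterable[str], keyword: str,
                  window: int = 100, threshold: int = 5) -> Iterator[int]:
    """Stream style: examine events as they arrive, keep only a sliding window,
    and yield the event index whenever the keyword count in the current
    window reaches the threshold."""
    recent = deque(maxlen=window)       # remembers only the last `window` events
    for i, event in enumerate(events):
        recent.append(keyword in event)
        if sum(recent) >= threshold:
            yield i

if __name__ == "__main__":
    feed = (f"event {n}: {'match' if n % 7 == 0 else 'noise'}" for n in range(1000))
    for idx in stream_alerts(feed, "match", window=50, threshold=5):
        print(f"pattern threshold first crossed at event {idx}")
        break
```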
Contextual Computing
With multiple sensors built into new mobile devices (e.g. cameras, microphones, GPS, compass, gyroscopes, radios, etc.) new forms of real-time processing and aggregation become possible. In some senses, then, this algorithmic process is the real-time construction of a person's possible "futures" or their "futurity" – the idea, even, that eventually the curation systems will know "you" better than you know yourself – which is interesting for notions of ethics/ethos. This is the computational real-time imaginary envisaged by corporations, like Google, that want to tell you what you should be doing next…
Anticipatory Computing
Our phones are now smart phones, and as such have become media devices that can also be used to identify, monitor and control our actions and behaviour through anticipatory computing. Elements of subjectivity, judgement and cognitive capacities are increasingly delegated to algorithms and prescribed to us through our devices, and there is clearly the danger of a lack of critical reflexivity, or even critical thought, in this new subject. This new paradigm of anticipatory computing stresses the importance of connecting up multiple technologies to enable a new kind of intelligence within these technical devices.
Towards a Critical Response to the Post-Digital
Computation in a post-digital age is fundamentally changing the way in which knowledge is created, used, shared and understood, and in doing so changing the relationship between knowledge and freedom. Indeed, following Foucault (1982) the “task of philosophy as a critical analysis of our world is something which is more and more important. Maybe the most certain of all philosophical problems is the problem of the present time, and of what we are, in this very moment… maybe to refuse what we are” (Dreyfus and Rabinow 1982: 216). 
One way of doing this is to think about Critical Encryption Practices, for example, and the way in which technical decisions (e.g. plaintext defaults on email) are made for us. The critique of knowledge also calls for us to question the coding of instrumentalised reason into the computal. This calls for a critique of computational knowledge and as such a critique of the society producing that knowledge. 
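As a minimal illustration of the kind of technical literacy a critical encryption practice might begin from, the sketch below uses the Python cryptography package's Fernet recipe (symmetric, authenticated encryption) to move a short message out of plaintext; it is a toy example, not a design for secure email, and key management is precisely the hard part it leaves out.

```python
# Toy illustration of symmetric, authenticated encryption with the Python
# "cryptography" package (Fernet recipe). A sketch of moving beyond plaintext
# defaults, not a design for secure messaging or email.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, key exchange and storage are the hard problems
f = Fernet(key)

token = f.encrypt(b"a message that would otherwise travel as plaintext")
print(token)                  # ciphertext: safe to transmit or store

print(f.decrypt(token))       # only holders of the key can recover the message
```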
Bibliography
Andreessen, M. (2011) Why Software Is Eating The World, Wall Street Journal, August 20 2011, http://online.wsj.com/article/SB10001424053111903480904576512250915629460.html#articleTabs%3Darticle
Bamford, J. (2012) The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say), Wired, accessed 19/03/2012, http://www.wired.com/threatlevel/2012/03/ff_nsadatacenter/all/1
Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave Macmillan.
Berry, D. M. (2013) The Minigraph: The Future of the Monograph?, Stunlaw, accessed 29/08/2013, http://stunlaw.blogspot.nl/2013/08/the-minigraph-future-of-monograph.html
Berry, D. M., Dartel, M. v., Dieter, M., Kasprzak, M., Muller, N., O'Reilly, R., and Vicente, J. L. (2012) New Aesthetic, New Anxieties, Amsterdam: V2 Press.
Berry, D. M., Dieter, M., Gottlieb, B., and Voropai, L. (2013) Imaginary Museums, Computationality & the New Aesthetic, BWPWAP, Berlin: Transmediale.
Briz, N. (2013) Apple Computers, accessed 29/08/2013, http://nickbriz.com/applecomputers/
Davies, N. (2013) MI5 feared GCHQ went ‘too far’ over phone and internet monitoring, The Guardian, accessed 22/06/2013, http://www.guardian.co.uk/uk/2013/jun/23/mi5-feared-gchq-went-too-far
Dilger, D. E. (2013) Inside iOS 7: iBeacons enhance apps' location awareness via Bluetooth LE, AppleInsider, accessed 02/09/2013, http://appleinsider.com/articles/13/06/19/inside-ios-7-ibeacons-enhance-apps-location-awareness-via-bluetooth-le

Emami, G (2012) Bruce Sterling At SXSW 2012: The Best Quotes, The Huffington Post, accessed 29/08/2013, http://www.huffingtonpost.com/2012/03/13/bruce-sterling-sxsw-2012_n_1343353.html
Feldman, S. (2009) Micro-Location Overview: Beyond the Metre…to the Centimetre, Sensors and Systems, accessed 02/09/2013, http://sensorsandsystems.com/article/columns/6526-micro-location-overview-beyond-the-metreto-the-centimetre.html

Finley, K. (2013) Meet the Hackers Who Want to Jailbreak the Internet, Wired, http://www.wired.com/wiredenterprise/2013/08/indie-web/
ITU (2012) Measuring the Information Society, accessed 01/01/2013, http://www.itu.int/ITU-D/ict/publications/idi/material/2012/MIS2012-ExecSum-E.pdf
Kalakota, R. (2011) Big Data Infographic and Gartner 2012 Top 10 Strategic Tech Trends, accessed 05/05/2012, http://practicalanalytics.wordpress.com/2011/11/11/big-data-infographic-and-gartner-2012-top-10-strategic-tech-trends

Kosner, A. W. (2013) Why Micro-Location iBeacons May Be Apple’s Biggest New Feature For iOS 7, Forbes, accessed 02/09/2013, http://www.forbes.com/sites/anthonykosner/2013/08/29/why-micro-location-ibeacons-may-be-apples-biggest-new-feature-for-ios-7/

Mann, S. (2001) Digital Destiny and Human Possibility in the Age of the Wearable Computer, London: Random House.


Manovich, L. (2013) Software Takes Command, MIT Press.
McCormick, T. (2013) From Monograph to Multigraph: the Distributed Book, LSE Blog: Impact of Social Sciences, accessed 02/09/2013, http://blogs.lse.ac.uk/impactofsocialsciences/2013/01/17/from-monograph-to-multigraph-the-distributed-book/

Personal Cloud (2013) Personal Clouds, accessed 29/08/2013, http://personal-clouds.org/wiki/Main_Page
Poitras, L., Rosenbach, M., Schmid, F., Stark, H. and Stock, J. (2013) How the NSA Targets Germany and Europe, Spiegel, accessed 02/07/2013, http://www.spiegel.de/international/world/secret-documents-nsa-targeted-germany-and-eu-buildings-a-908609.html
Schlueter Langdon, C. (2003) Does IT Matter? An HBR Debate – Letter from Chris Schlueter Langdon, Harvard Business Review (June): 16, accessed 26/08/2013, http://www.ebizstrategy.org/research/HBRLetter/HBRletter.htm and http://www.simoes.com.br/mba/material/ebusiness/ITDOESNTMATTER.pdf
Seppala, T. J. (2013) Broadcom adds WiFi Direct to its embedded device platform, furthers our internet-of-things future, Engadget, accessed 02/09/2013, http://www.engadget.com/2013/08/27/broadcom-wiced-direct/

Stone, B. (2009) Amazon Erases Orwell Books From Kindle, The New York Times, accessed 29/08/2013, http://www.nytimes.com/2009/07/18/technology/companies/18amazon.html?_r=0

The Minigraph: The Future of the Monograph?


It has taken digital a lot longer than many had thought to provide a serious challenge to print, but it seems to me that we are now in a new moment in which digital texts enable screen-reading, if it is not an anachronism to still call it that, as a sustained reading practice. Here, I am thinking particularly of the way in which screen technologies, including the high-resolution retina displays common on iPhones, Kindle e-ink, etc., combined with much more sensitive typesetting design practices in relation to text, are producing long-form texts that are pleasurable to read on a screen-based medium and as ebooks. This has happened most noticeably in magazine articles and longer newspaper features, but is beginning to drift over into the well-designed reading apps that we find on our mobile devices, such as Pocket and the "Reader" function in Safari. With this change, questions are finally being seriously asked about our writing practices, especially in terms of the assumptions and affordances that are coded into software word-processors, such as Microsoft Word, which assumes, if not enforces, a print-medium mentality in the writing practice. Word wants you to print the documents you write, and this prescriptive behaviour by the software encourages us to "check" our documents in a "real" paper form before committing to them – even if the final form was always going to be a digital PDF. The reason is that even the humble PDF is designed for printing, as anyone who has tried to read a PDF document on a digital screen will attest, with its clunky and ill-formatted structure that actively fights against a user trying to resize a document to read. But when the reading practices of screen media are sufficient, then many of the assumptions built into our writing can be jettisoned, and with them, most disruptively and unpredictably, the practice of writing for paper.

For there is little doubt that writing and reading the screen is different from print (see Berry 2012; Gold 2012). These differences are not just found at a technical level, for they also include certain forms of social practice, such as reading in public, passing around documents, sharing ideas and so forth. They also include the kinds of social signalling that digital documents have been very poor at incorporating into their structures, such as the cover, the publisher, the "name", or a striking design or image. Nonetheless, certainly at the present phase of digital texts, I think it is the typesetting and typography, combined with the social reading practices that take place, such as social sharing, marking, copying/pasting, and commenting, that make digital suddenly a viable way of creating and consuming textual works. In some ways the social signalling of the cover artwork, etc. has been subsumed into social media such as Facebook and Twitter, but I think it is a matter of time before this is incorporated into mobile devices in some way, when screen technologies, especially an e-ink back cover, can be built for pennies. But to return to the texts themselves, the question of writing, of putting pen to paper, an ironic phrase if ever there was one, is on the cusp of radical change. The long thirty-year period of stable writing software created by the virtual monopoly that Microsoft gained over desktop computers, most notably represented by Windows, its desktop operating system, and Office, its productivity suite, is drawing to a close. From its initial introduction in 1983 on the Xenix system as Multi-Tool Word, renamed that same year to the familiar Microsoft Word that we all know today (and often hate), print has been the lodestar of word-processor design.

The next stage of digital text is unveiling before our eyes, and as it does, much of the textual apparatus of print is migrating to the digital platform, and as it does so the advantages of new search and discovery practices make books extremely visible and usable again, such as through Google Books (Dunleavy 2012). There is still a lot of experimentation in this space and some problems still remain: for example, there is currently no viable alternative to the "chunking" process of reading that print has taught us through pages and page numbering, nor is there a means of bookmarking as convenient as the obviousness of the changing weight of the book as it moves through our hands, or the visual clues afforded by the page volume changing from unread to read as we turn the pages. However, this has been mitigated in some ways by a turning away from the very long-form, in terms of book or monograph-length texts of around 80,000 words, to the moderate long-form, represented by the 15-40,000 word text which I want to call the minigraph.

By minigraph I am seeking to distinguish a specific length of text, and therefore size of book, that is able to move beyond the very real limitations of the 6-8,000 word article, and yet is not at such a length that the chunking problem of reading digital texts becomes too much of a problem. In other words, in its current stage of implementation, I think that digital long-form texts are most comfortable to read when they stay within this golden ratio of 15-40,000 words, broken into five or six chapters. The lack of chunking is still a problem, in my opinion, without helpful "page" numbers, and I don't think that paragraph numbering has provided a usable solution to this, but the shortness of the text means that it is readable within a reasonable period of time, creating a de facto chunking at the level of the minigraph chapter (between 2,000 and 5,000 words). Indeed, the introduction of an algorithmic paging system that is device-independent would also be helpful, for example through a notion of "planes" which are analogous to pages but calculated in real-time (see Note 1 below). This would help sidestep the problem of fatigue in digital reading, apparent even in our retina/e-ink screen practices, while creating works that are long enough to be satisfying to read and can offer interesting discussion, digression and scholarly apparatus as necessary. Other publishers have already been experimenting with the form, such as Palgrave with its Pivot series, a new e-book format: "at 30,000 to 50,000 words, it's longer than a journal article but shorter than a traditional monograph. The Palgrave Pivot, said Hazel Newton, head of digital publishing, 'fills the space in the middle'" (Cassuto 2013). Indeed, Stanford University Press has also started "to release new material in the form of midlength e-books. 'Stanford Briefs' will run 20,000 to 40,000 words in length", which Cassuto (2013) similarly calls the "mini-monograph".

The next question is clearly how one should write a minigraph, considering the likelihood that Microsoft Word will algorithmically prescribe paper norms, which in academia tend towards either the 7,000-word article or the 70,000-word monograph. Here, I think Dieter (2013) is right to make links with the writing practices of Book Sprints as a connecting thread to new forms of publishing (see Hyde 2013). The Book Sprint is a "genre of the 'flash' book, written under a short timeframe, to emerge as a contributor to debates, ideas and practices in contemporary culture… interventions that go well beyond a well-written blog-post or tweet, and give some substantive weight to a discussion or issue… within a range of 20-40,000 words" (Berry and Dieter 2012). This rapid and collaborative means of writing is a very creative and intensified form of writing, but it also tends towards the creation of texts that appear to be at an "appropriate" size for the digital medium which makes those writing practices possible in the first place. Book Sprints themselves are usually formed from 4-8 people actively involved in the writing process, facilitated by another, non-writing member, which conveniently maps onto the structure of minigraph chapters discussed earlier. For Dieter, the Book Sprint is conducive to new writing practices, and by extension new reading practices, for network cultures, and therefore to "formations that break from subjugation or blockages in pre-existing media and organizational workflows" (Dieter 2013). In this I think he is broadly correct; however, Book Sprints also point toward certain affordances for textual production that are conducive to reading and writing in a digital medium, and in the context of this discussion, the word count of a minigraph.

Nick Montfort (2013) has suggested a new, predominantly digital, form of writing that enables different forms of scholarly communication, in his case that of the technical report, which he argues "is as fast as a speeding blog, as detailed and structured as a journal article, and able to be tweeted, discussed, assessed, and used as much as any official publication can be. It is issued entirely without peer review". Montfort, however, connects the technical report to the "grey literature" that is not usually considered part of scholarly publishing as such. Experiments such as the "pamphlets" issued by the Stanford Media Lab, which Montfort argues are all but technical reports in name, seem to lie between 10,000 and 15,000 words in length, slightly longer than a journal article, and yet a little shorter than a minigraph.

However, a key difference, at least in the form in which I am considering the minigraph as a viable form of scholarly production, is that neither the Book Sprint nor the technical report is peer-reviewed, although they might be "peer-to-peer reviewed" (see Cebula 2010; Fitzpatrick 2011). Rather, they are rapid-production, sharing and collaborative forms of document geared towards social media and intervention or technical documentation. In contrast, the minigraph would share with the other main scholarly outputs, the journal article and the monograph, the need to be peer-reviewed and produced to a high level of textual quality. This is where the minigraph points to new emergent affordances of the digital that enable the kinds of scholarly activity, such as presenting finished work, carefully annotated and referenced, supported and discursively presented, through these nascent digital textual technologies. That is, if these intuitions are right about the current state of digital technologies and their affordances for the writing and reading of scholarly work, then the minigraph might be an object with the right structure and form for digital scholarship, augmenting the article, review, monograph and so forth. Indeed, the minigraph might offer exactly the right kind of compromise for scholarly work that is called for by, for example, Drucker (2013) and Nardone and Fitzpatrick (2013), and point towards new possibilities for writing beyond the "article" or the "book", forms of "scholarship" that Robertson (2013) describes as institutionally constraining on academic creativity.

In some ways the minigraph seems a much less radical suggestion than the multi-modal, all-singing-and-dancing digital object that many have been calling for or describing. However, the minigraph, as conceptualised here, is actually potentially deeply computational in form; more properly, we might describe the minigraph as a code-object. In this sense, the minigraph is able to contain programmable objects itself, in addition to its textual load, opening up many possibilities for interactive dimensions to its use, as suggested by the Computable Document Format (CDF) created by Wolfram. The minigraph as described here does not, of course, exist as such, although its form is detectable in, for example, the documents produced by the Quip app, or the dexy format, as "literate documentation", or the Booktype software. It is manifestly not meant to be in the form of Google Docs/Drive, which is essentially traditional word-processing software in the cloud, and which ironically still revolves around a print metaphor. The minigraph is, then, a technical imaginary for what digital scholarly writing might be, and it remains to be coded into concrete software and manifested in the practices of scholarly writers and readers. Nonetheless, as a form of long-form text amenable to the mobile practices of readers today, the 15-40,000 word minigraph could provide a key expressive scholarly form for the digital age.

Notes

[1] The minigraph chunks would be at 250-350 word intervals, roughly pages, with chapters of 2-5,000 words. There is no reason why the term "page" could not be used for these chunks, but perhaps "plane" is more appropriate, in terms of chunks representing vertical "cuts" in the text at an appropriate frequency. So "plane 5" would be analogous to page 5, but mathematically calculable: approximately (300 x plane number) gives the start word, and ((300 x (plane number + 1)) - 1) gives the end word of a particular plane. This would make the page both algorithmically calculable, and therefore device-independent, and suitable for scholarly referencing, producing usable, user-friendly numbering throughout the text. As the planes are represented on screen digitally, the numbering system would be immediately comprehensible to existing users of printed texts, and therefore offer a simple transition from paper-based page numbering to algorithmic numbering of documents. If the document were printed, the planes could be automatically reformatted to the page size, further making the link between page and plane straightforward for a reader who might never realise the algorithmic source of the numbering system for plane chunks in a minigraph. Indeed, one might place the "plane resolution" within the minigraph text itself, in this case "300", enabling different plane chunks to be used within different texts, and hence changing the way in which a plane is calculated on a book-by-book basis – very similar to page numbering. One might even have different plane resolutions within chapters in a book, enabling different chunks in different chapters or regions.
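A minimal sketch of how such a device-independent "plane" might be computed, following the arithmetic in the note above; the function and parameter names are my own, and the default resolution of 300 words is simply the example value used in the note.

```python
# Sketch of the device-independent "plane" numbering described above.
# The plane resolution (words per plane) travels with the text itself,
# so each book (or even chapter) can choose its own value.

def plane_bounds(plane_number: int, resolution: int = 300) -> tuple:
    """Return (start_word, end_word) for a given plane.

    Following the note: start = resolution * plane_number,
    end = (resolution * (plane_number + 1)) - 1.
    """
    start = resolution * plane_number
    end = resolution * (plane_number + 1) - 1
    return start, end

def plane_of_word(word_index: int, resolution: int = 300) -> int:
    """Inverse mapping: which plane does a given word position fall on?"""
    return word_index // resolution

if __name__ == "__main__":
    print(plane_bounds(5))       # (1500, 1799) with the default resolution of 300
    print(plane_of_word(1650))   # 5 -- so "plane 5" can be cited like a page number
```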

Bibliography

Berry, D. M. (2012) Understanding Digital Humanities, London: Palgrave.

Berry, D. M. and Dieter, M. (2012) Book Sprinting, accessed 14/08/2013, http://www.booksprints.net/2012/09/everything-you-wanted-to-know/

Cassuto, L (2013) The Rise of the Mini-Monograph, The Chronicle of Higher Education, accessed 18/08/2013, http://chronicle.com/article/The-Rise-of-the-Mini-Monograph/141007/

Cebula, L. (2010) Peer Review 2.0, North West History, accessed 14/08/2013, http://northwesthistory.blogspot.co.uk/2010/09/peer-review-20.html

Dieter, M. (2013) Book Sprints, Post-Digital Scholarship and Subjectivation, Hybrid Publishing Lab, accessed 14/08/2013, http://hybridpublishing.org/2013/07/book-sprints-post-digital-scholarship-and-subjectivation/

Dunleavy, P. (2012) Ebooks herald the second coming of books in university social science, LSE Review of Books, accessed 18/08/2013, http://blogs.lse.ac.uk/lsereviewofbooks/2012/05/06/ebooks-herald-the-second-coming-of-books-in-university-social-science/

Drucker, J. (2013) Scholarly Publishing, Amodern, accessed 14/08/2013, http://amodern.net/article/scholarly-publishing-micro-units-and-the-macro-scale/

Fitzpatrick, K. (2011) Planned Obsolescence: Publishing, Technology, and the Future of the Academy, New York University Press.

Gold, M. K. (2012) Debates in the Digital Humanities, University of Minnesota Press.

Hyde, A. (2013) Book Sprints, accessed 14/08/2013, http://www.booksprints.net

Montfort, N. (2013) Beyond the Journal and the Blog, Amodern, accessed 14/08/2013, http://amodern.net/article/beyond-the-journal-and-the-blog-the-technical-report-for-communication-in-the-humanities/

Nardone, M., and Fitzpatrick, K. (2013) We Have Never Done It That Way Before, Amodern, accessed 14/08/2013, http://amodern.net/article/we-have-never-done-it-that-way-before/

Robertson, B. J. (2013) The Grammatization of Scholarship, Amodern, accessed 14/08/2013, http://amodern.net/article/the-grammatization-of-scholarship/

Setup Seminar: Understanding The New Aesthetic

A very enjoyable evening was spent at Setup, Utrecht, discussing the New Aesthetic with presentations by myself, Darko Fritz and Frank Kloos, organised by Daniëlle de Jonge. The discussion was opened up by Tijmen Schep who gave an interesting introduction to the main contours of the new aesthetic and explained why Setup had organised the evening lectures.

Darko Fritz tried to unpick the claims of the new aesthetic to being either "new" or an "aesthetic", placing computer art and new media art within an art historical context. Frank Kloos gave a wonderful presentation with examples of the new aesthetic from a variety of different contexts, including datamoshing and the recent use of the new aesthetic in music videos.

Overall the event was a great success, with a really excellent audience composed of interesting people, experts and artists, and the discussion around computation and the extent to which it has become part of everyday life was surprisingly vibrant and full of great contributions.

My earlier post on the New Aesthetic here.

Some pictures were taken at the event, of Darko Fritz, Frank Kloos, Compos 68 in the audience, Daniëlle de Jonge and Tijmen Schep.

Against Remediation

A new aesthetic through Google Maps

In contemporary life, the social is a site for a particular form of technological focus and intensification. Traditional social experience has, of course, been subject to various forms of technical mediation and formatting, and to control technologies. Think, for example, of the way in which the telephone structured the conversation, diminishing the value of proximity, whilst simultaneously intensifying certain kinds of bodily response and language use. It is important, then, to trace media genealogies carefully and to be aware of the previous ways in which the technological and the social have met – and this includes the missteps, mistakes, dead-ends, and dead media. Media change, however, has increasingly been understood in terms of the notion of remediation, which has been thought to contribute helpfully to our thinking about media change, whilst sustaining a notion of medium specificity. Bolter and Grusin (2000), who coined its contemporary usage, state,

[W]e call the representation of one medium in another remediation, and we will argue that remediation is a defining characteristic of the new digital media. What might seem at first to be an esoteric practice is so widespread that we can identify a spectrum of different ways in which digital media remediate their predecessors, a spectrum depending on the degree of perceived competition or rivalry between the new media and the old (Bolter and Grusin 2000: 45).

However, it seems to me that we now need to move beyond talk of the remediation of previous modes of technological experience and media, particularly when we attempt to understand computational media. I think that this is important for a number of reasons, both theoretical and empirical. Firstly, in a theoretical vein, the concept of remediation has become a hegemonic concept and as such has lost its theoretical force and value. Remediation traces its intuition from McLuhan’s notion that the content of a new media is an old media – McLuhan actually thought of “retrieval” as a “law” of media. But it seems to me that beyond a fairly banal point, this move has the effect of both desensitising us to the specificity and materiality of a “new” media, and more problematically, resurrecting a form of media hauntology, in as much as the old media concepts “possess” the new media form. Whilst it might have held some truth for the old “new” media, although even here I am somewhat sceptical, within the context of digital, and more particularly computational media, I think the notion is increasingly unhelpful. Secondly, remediation gestures toward a depth model of media forms, within which it encourages a kind of originary media, origo, to be postulated, or even to remain latent as an a priori. This enables a form of reading of the computational which justifies a disavowal of the digital, through a double movement of simultaneously exclaiming the newness of computational media, whilst hypostatizing a previous media form “within” the computational.

Thirdly, I do not believe that it accurately describes the empirical situation of computational media; in fact it obfuscates the specificity of the computational in relation to its structure and form. This has a secondary effect in as much as the analysis of computational media is viewed through a lens, or method, that is legitimated through this prior claim to remediation. Fourthly, I think remediation draws its force through a reliance on ocularity; that is, remediation is implicitly visual in its conceptualisation of media forms, and the way in which one medium contains another relies on a deeply visual metaphor. This is significant in relation to the hegemony of the visual form of media in the twentieth century. Lastly, and for this reason, I think it is time for us to historicize the concept of remediation. Indeed, remediation seems to me to be a concept appropriate to the media technologies of the twentieth century, and shaped by the historical context of thinking about media in relation to the materialities of those prior media forms and the constellation of concepts which appeared appropriate to them. We need to think computational media in terms which de-emphasize, or certainly reduce, the background assumptions of remediation as something akin to a looking glass, and think in terms of a medium as an agency or means of doing something – this means thinking beyond the screenic.

So in this paper, in contrast to talk of "remediation", and in the context of computational media, I want to think about de-mediation, that is, when a media form is no longer dominant, becoming marginal, and is later absorbed/reconstructed in a new medium which en-mediates it. By en-mediate I want to draw attention to the securing of the boundaries related to a format, that is, a representation, or mimesis, of a previous medium – but it is not the "same", nor is it "contained" in the new medium. This distinction is important because, at the moment of enmediation, computational categories and techniques transform the newly enmediated form – I am thinking here, for example, of the examples given by the new aesthetic and related computational aesthetics. By enmediate I want to draw links with Heidegger's notion of enframing (Gestell) and the structuring provided by a condition of possibility, that is, a historical constellation of concepts. I also want to highlight the processual, computational nature of en-mediation; in other words, enmediation requires constant work to stabilize the enmediated media. In this sense, computational media is deeply related to enmediation as a total process of mediation through digital technologies. One way of thinking about enmediation is to understand it as gesturing towards a paradigmatic shift in the way in which "to mediate" should be understood, one which does not relate to "passing through" or "informational transfer" as such; rather, enmediate, in this discussion, aims to enumerate and uncover the specificity of computational mediation as machinic processing.

I therefore want to move quickly to thinking about what it means to enmediate the social. By the term "social" I am particularly thinking of the mediational foundations for sociality that were made available in twentieth-century media, and which, when enmediated, become something new. So sociality is not remediated, it is enmediated – that is, the computational mediation of society is not the same as the mediation processes of broadcast media; rather it has a specificity that is occluded if we rely on the concept of remediation to understand it. Thus, it is not an originary form of sociality that is somehow encoded within media (or even constructed/co-constructed), and which is re-presented in the multiple remediations that have occurred historically. Rather it is the enmediation of specific forms of sociality, which in the process of enmediation are themselves transformed, constructed and made possible in a number of different and historically specific modes of existence.

Bibliography
Bolter, J. D. and Grusin, R. (2000) Remediation: Understanding New Media, MIT Press.

The New Aesthetic: A Maieutic of Computationality

Screen testing at main stage for the Republican convention in Tampa, Fla (2012)

Many hasty claims are now being made that the new aesthetic is over, finished, or defunct. I think that as with many of these things we will have to wait and see to the extent to which the new aesthetic is “new”, an “aesthetic”, used in practice, or has any trajectory associated with it. For me, the responses it generates are as interesting as the concept of the new aesthetic itself.

And regarding the "remembering" (perhaps, territorialization) of new media and previous practices, let's not forget that forgetting things (deterritorialization) can be extremely productive, both theoretically and in everyday practice (as elpis, perhaps, if not as entelechy of new generations). Indeed, forgetting can be like forgiving,[1] and in this sense can allow the absorption or remediation of previous forms (a past bequeathed by the dead) that may have been contradictory or conflictual to be transcended at a higher level (this may also happen through a dialectical move, of course).[2] This is, then, a politics of memory as well as an aesthetic.

But the claim that the NA "seems to be all gesture and no ideology" is clearly mistaken. Yes, NA is clearly profoundly gestural and is focused on the practice of doing, in some sense, even if the doing is merely curatorial or collecting other things (as archive/database of the present). The doing is also post-human in that algorithms and their delegated responsibility and control appear as a returning theme (as the programming industry, as logics of military colonisation of everyday life, as technical mediation, as speed constitutive of absolute past, or as reconstitution of knowledge itself). It is also ideological to the extent that it is an attempt to further develop a post-human aesthetic (and of course, inevitably this will/may/should end in failure) but nonetheless reflects in interesting ways a process of cashing out the computational in the realm of the aesthetic – in some senses a maieutic of computational memory, seeing and doing (a "remembering" of glitch ontology or computationality).

As to the charge of the inevitability of historicism to counter the claims of the new aesthetic, one might wish to consider the extent to which the building of the new aesthetic may share the values of computer science (highly ideological, I might add), which is also profoundly ahistorical and which enables the delegation of the autonomy of the new aesthetic (as code/software) as a computational sphere. But this is not to deny the importance of critical theory here, far from it; rather it is to raise a question about computation's immunity to the claims that critical approaches inevitably make – as Ian Bogost recently declared (about a different subject), are these not just "self-described radical leftist academics" and their "predictable critiques"? Could not the new aesthetic form an alliance here with object-oriented ontology?

Within this assemblage, the industrialisation of programming and memory becomes linked to the industrialisation of "seeing" (and here I am thinking of mediatic industries). What I am trying to gesture towards, if only tentatively, is that the new aesthetic, as an aesthetic of the radically autonomous claims of a highly computational post-digital society, might format the world in ways which profoundly determine, or at least offer concrete tendencies towards, an aesthetic which is immune to historicism – in other words, the algorithms aren't listening to the humanists – and if so, do we need to follow Stephen Ramsay's call for humanists to build?

Here I point both to the industrialisation of memory and to the drive towards a permanent revolution in all forms of knowledge that the computational industries ceaselessly aim at. That is, the new aesthetic may be a reflexive sighting (the image, the imaginary, the imagined?) and acknowledgement of the mass-produced temporal objects of the programming industries, in as much as they are shared structures, forms, and means – that is, algorithms and codes – that construct new forms of reception to which consciousness and collective unconsciousness will increasingly correspond.

Notes

[1] “Forgiving is the only reaction which does not merely re-act but acts anew and unexpectedly, unconditioned by the act which provoked it and therefore freeing from its consequences both the one who forgives and the one who is forgiven” (Hannah Arendt, The Human Condition, page 241); “and if he trespass against thee… and… turn again to thee, saying, I changed my mind; thou shalt release him” (Luke 17: 3–4).
[2] Here I am thinking in terms of Mannheim’s concepts of “Generation Entelechy” and “Generation Unit” to consider the ways in which the quicker the tempo of social and cultural change – here understood as represented through digital technology – the greater the chances that a particular generation location’s group will react to changed circumstances by producing its own entelechy.

New Aesthetic Argumentum Ad Hominem

Papercraft Self Portrait – 2009 (Testroete)

One of the most frustrating contemporary ways to attack any new idea, practice or moment is to label it as “buzz-worthy” or an “internet meme”. The weakness of this attack should be obvious, but strangely it has become a powerful way to dismiss things without applying any critical thought to the content of the object of discussion. In other words it is argumentation petitio principii, where the form of the argument is “the internet meme, the new aesthetic, should be ignored because it is an internet meme”. Or even, in some forms, an argumentum ad hominem, where the attack is aimed at James Bridle (as the originator of the term) rather than the new aesthetic itself. Equally, the attacks may also be combined.

I think the whole ‘internet meme’, ‘buzz’, ‘promotional strategy’ angle on the new aesthetic is indicative of a wider set of worries in relation to a new scepticism, as it were (related also, possibly, to the skepticism movement). We see it on Twitter, where the medium of communication seems to encourage a kind of mass scepticism, where everyone makes the same point simultaneously: that the other side is blindly following, a ‘fanboy’, irrational, suspect, or somehow beholden to a dark power intent on closing, restricting or tightening individual freedoms – of course, the ‘I’ is smart enough to reject the illusion and unmask the hidden forces. This is also, I think, a worry about being caught out, being laughed at, or being distracted by (yet) another internet fad. I also worry that the new aesthetic ‘internet meme’ criticism is particularly ad hominem, aimed, as it usually is, towards its birth within the creative industries. I think we really need to move on from this level of scepticism and be more dialectical in our attitude towards the possibilities in, and suggested by, the new aesthetic. This is where critical theory can be a valuable contributor to the debate.

For example, part of the new aesthetic is a form of cultural practice which is related to a postmodern and fundamentally paranoid vision of being watched, observed, coded, processed or formatted. I find the aesthetic dimension to this particularly fascinating, in as much as the representational practices are often (but not always) retro and, in some senses, tangential to the physical, cultural, or even computational processes actually associated with such technologies. This is both, I suppose, a distraction, in as much as it misses the target, if we assume that the real can ever be represented accurately (which I don’t), but also, and more promisingly, an aesthetic that remains firmly human-mediated, contra the claims of those who want to “see like machines”. That is, the new aesthetic is an aestheticization of computational technology and computational techniques more generally. It is also fascinating in terms of the refusal of the new aesthetic to abide by the careful boundary monitoring of art and the ‘creative industry’ more generally, really bringing to the fore the questions raised by Liu, for example, in The Laws of Cool. One might say that it follows the computational propensity towards dissolving traditional boundaries and disciplinary borders.

I also find the new aesthetic important because it has an inbuilt potentiality towards critical reflexivity, both towards itself (does the new aesthetic exist?) and towards artistic practice (is this art?), curation (should this be in galleries?), and technology (what is technology?). There is also, I believe, an interesting utopian kernel to the new aesthetic, in terms of its visions and creations – what we might call the paradigmatic forms – which mark the crossing over of certain important boundaries, such as culture/nature, technology/human, economic/aesthetic and so on. Here I am thinking of the notion of augmented humanity, or humanity 2.0, for example. This criticality is manifested in the new aesthetic’s continual seeking to ‘open up’ the black boxes of technology, to look at developments in science, technology and technique and to try to place them within histories and traditions – in the reemergence of social contradictions, for example. But even an autonomous new aesthetic, as it were, points towards the anonymous and universal political and cultural domination represented by computational techniques which are now deeply embedded in systems that we experience in all aspects of our lives. There is much to explore here.

Moroso pixelated sofa and nanimaquina rug, featured on Design Milk

The new aesthetic, of course, is as much symptomatic of a computational world as it is subject to the forces that drive that world. This means that it has every potential to be sold, standardised, and served up to the willing mass of consumers like any other neatly packaged product. Perhaps even more so, given its ease of distribution and reconfiguration within computational systems, such as Twitter and Tumblr. But it doesn’t have to be that way, and so far I remain hopeful that even in its impoverished, consumerized form, it still serves notice of computational thinking and processes, which then stand out against other logics. This is certainly one of the interesting dimensions to the new aesthetic, both in terms of the materiality of computationality and in terms of the need to understand the logics of postmodern capitalism, even ones as abstract as obscure computational systems of control.

For me, the very possibility of a self-defined new ‘aesthetic’ enables this potentiality – of course, there are no simple concepts as such, but the new aesthetic, for me, acts as a “bridge” (following Deleuze and Guattari for a moment). Claiming that it is a new ‘aesthetic’ makes available the conceptual resources associated with, and materialised in, aesthetic practices, which may need to be “dusted off” and used as if they were, in a sense, autonomous (that is, even, uncritical). This decoupling of the concept (no matter that in actuality one might claim that no such decoupling could really have happened) potentially changes the nature of the performances that are facilitated or granted by the space opened within the constellation of concepts around the ‘new aesthetic’ (again, whatever that is) – in a sense this might also render components within the new aesthetic inseparable, as the optic of the new aesthetic, like any medium, may change the nature of what can be seen. Again, this is not necessarily a bad thing.

Glitch Textiles by Phillip David Stearns

Another way of putting it, perhaps, would be that a social ontology is made possible which, within the terms of the constellation of practices and concepts grounding it, is both distanced from and placed in opposition to existing and historical practices. Where this is interesting is that, so far, the new aesthetic, as a set of curatorial or collectionist practices, has been deeply recursive in its manifestation – both computational in structure (certainly something I am interested in about it) and strikingly visual (so far) – and here the possibility of an immanent critique central to the new aesthetic can be identified, I think. Of course, it is too early to say how far we can push this, especially with something as nascent as the new aesthetic, which is still very much a contested constellation of concepts and ideas playing out in various media forms, etc., but nonetheless I suggest that one might still detect the outlines of a kind of mediated non-identity implicit within the new aesthetic, and this makes it interesting. So I am not claiming, in any sense, that the new aesthetic was “founded on critical thinking”, rather that, in a similar way, computational processes are not “critical thinking” but contain a certain non-reflexive reflexivity when seen through their recursive strategies – but again this is a potentiality that needs to be uncovered, and not in any sense determined. This is, perhaps, the site of a politics of the new aesthetic.

Certainly there is much work to be done with the new aesthetic, and I, for one, do not think that everything is fixed in aspic – either by Bridle or any of the other commentators. Indeed, there is a need to think about the new aesthetic from a number of different perspectives; that, for me, is the point at which the new aesthetic is interesting to think with, and pushing it away seems to me an “over-hasty” move when it clearly points to either a fresh constellation of concepts and ideas, or certainly a means for us to think about the old constellations in a new way. This means that we should not aim to be either for or against the new aesthetic, as such, but rather be more interested in the philosophical and political work the new aesthetic makes possible.

New Book: New Aesthetic, New Anxieties

New Aesthetic New Anxieties is the result of a five-day Book Sprint organized by Michelle Kasprzak and led by Adam Hyde at V2_ from June 17–21, 2012. Authors: David M. Berry, Michel van Dartel, Michael Dieter, Michelle Kasprzak, Nat Muller, Rachel O’Reilly and José Luis de Vicente. Facilitated by: Adam Hyde.

You can download the e-book as an EPUB, MOBI, or PDF.

EPUB: http://www.v2.nl/files/new-aesthetic-new-anxieties-epub

MOBI: http://www.v2.nl/files/new-aesthetic-new-anxieties-mobi

PDF: http://www.v2.nl/files/new-aesthetic-new-anxieties-pdf

Annotatable online version: http://www.booki.cc/new-aesthetic-new-anxieties/_draft/_v/1.0/preface/

The New Aesthetic was a design concept and netculture phenomenon launched into the world by London designer James Bridle in 2011. It continues to attract attention within media art and to throw up associations with a variety of situated practices, including speculative design, net criticism, hacking, free and open source software development, locative media, sustainable hardware and so on. This is how we have considered the New Aesthetic: as an opportunity to rethink the relations between these contexts in the emergent episteme of computationality. There is a desperate need to confront the political pressures of neoliberalism manifested in these infrastructures. Indeed, these are risky, dangerous and problematic times; a period when critique should thrive. But here we need to forge new alliances, invent and discover problems of the common that nevertheless do not eliminate the fundamental differences in this ecology of practices. In this book, perhaps provocatively, we believe a great deal could be learned from the development of the New Aesthetic not only as a mood, but as a topic and fix for collective feeling that temporarily mobilizes networks. Is it possible to sustain and capture these atmospheres of debate and discussion beyond knee-jerk reactions and opportunistic self-promotion? These are crucial questions that the New Aesthetic invites us to consider, if only to keep a critical network culture in place.


Taking Care of the New Aesthetic

Strangely, and somewhat unexpectedly, James Bridle unilaterally closed the New Aesthetic Tumblr blog today, 6 May 2012, announcing ‘The New Aesthetic tumblr is now closed’, with some particular and general thanks and very little information about future plans. Perhaps this was always Bridle’s intention for a private project, but one can’t help wondering if the large amount of attention, the move to a public and contested concept, and the loss of control that this entailed may have encouraged a re-assertion of control. If so, this is a great pity and perhaps even an act of vandalism.

Harpa, Iceland  (Berry 2011)

This, then, is a critical turning point, or krisis,[1] for the nascent New Aesthetic movement, and, for me, the blog closure heralds an interesting struggle over what the New Aesthetic is, who owns or controls it, and in what directions it can now move. Certainly, I am of the opinion that to have closed the blog in this way insinuates a certain proprietary attitude to the New Aesthetic. Considering that the Tumblr blog has largely been a crowd-sourced project, giving no explanation and allowing no debate or discussion over the closure makes it look rather as if it harvested people’s submissions on what could have been a potentially participatory project. Whichever way it is cast, James Bridle looks rather high-handed in light of the many generous and interesting discussions that the New Aesthetic has thrown up across a variety of media.

One of the key questions will be the extent to which this blog was a central locus of, or collection for representing, the New Aesthetic more generally. Personally, I found myself less interested in the Tumblr blog, which became increasingly irrelevant in light of the high level of discussion found on Imperica, The Creators Project, The Atlantic, Crumb and elsewhere. But there is clearly a need for something beyond the mere writing and rewriting of the New Aesthetic that many of the essays around the topic represented. Indeed, there is a need for an inscription or articulation of the New Aesthetic through multiple forms, both visual and written (not to mention using the sensorium more generally). I hope that we will see a thousand New Aesthetic Pinterest, Tumblr, and PinIt sites bloom across the web.

Urban Cursor is a GPS enabled object (Sebastian Campion 2009)

Nonetheless, it is disappointing to see the number of Twitter commentators who have tweeted the equivalent of ‘well, that was that’, as if the single action of an individual were decisive in stifling a new and exciting way of articulating a way of being in the world. Indeed, this blog closure highlights the importance of taking care of the New Aesthetic, especially in its formative stages of development. Whilst there have been a number of dismissive and critical commentaries written about the New Aesthetic, I feel that there is a kernel of something radical and interesting happening which still remains to be fully articulated, expressed, and made manifest in and through various mediums of expression.

The New Aesthetic blog might be dead, but the New Aesthetic as a way of conceptualising the changes in our everyday life that are made possible in and through digital technology is still unfolding. For me the New Aesthetic was not so much a collection of things as the beginning of a new kind of Archive, an Archive in Motion, which combined what Bernard Stiegler called the Anamnesis (the embodied act of memory as recollection or remembrance) and Hypomnesis (the making-technical of memory through writing, photography, machines, etc.). Stiegler writes,

We have all had the experience of misplacing a memory bearing object – a slip of paper, an annotated book, an agenda, relic or fetish, etc. We discover then that a part of ourselves (like our memory) is outside of us. This material memory, that Hegel named objective, is partial. But it constitutes the most precious part of human memory: therein, the totality of the works of spirit, in all guises and aspects, takes shape (Stiegler n.d.).

Thus, particularly in relation to the affordances given by the networked and social media within which it circulated, combined with a set of nascent practices of collection, archive and display, the New Aesthetic is distinctive in a number of ways. Firstly, it gives a description and a way of representing and mediating the world in and through the digital that is understandable as an infinite archive (or collection). Secondly, it alternately highlights that something digital is happening in culture – of which we have only barely been conscious – and the way in which culture is happening to the digital. Lastly, the New Aesthetic points the direction of travel for the possibility of a Work of Art in the digital age.

In this, the New Aesthetic is something of a pharmakon, in that it is both potentially poison and cure for an age of pattern matching and pattern recognition. In as much as the archive was the set of rules governing the range of what can be verbally, audio-visually or alphanumerically expressed at all, and the database is the grounding cultural logic of software cultures, the New Aesthetic is the cultural eruption of the grammatisation of software logics into everyday life. That is, the New Aesthetic is a deictic moment which sheds light on changes in our lives that imperil things, practices, and engaging human relations, and the desire to make room for such relations, particularly when they are struggling to assert themselves against the dominance of neoliberal governance, bureaucratic structures and market logics.[2]

The New Aesthetic, in other words, brings these patterns to the surface, and in doing so articulates the unseen and little understood logic of computational society and the anxieties that this introduces.

Notes

[1] krisis: a separating, power of distinguishing, decision, choice, election, judgment, dispute.

[2] A deictic explanation is here understood as one which articulates a thing or event in its uniqueness.

Bibliography

Stiegler, B. (n.d.)  Anamnesis and Hypomnesis, accessed 06/05/2012, http://arsindustrialis.org/anamnesis-and-hypomnesis


Glitch Ontology

The digital (or computational) presents us with a number of theoretical and empirical challenges which we can understand within this commonly used set of binaries:
  • Linearity vs Hypertextuality
  • Narrative vs Database
  • Permanent vs Ephemeral
  • Bound vs Unbound
  • Individual vs Social
  • Deep vs Shallow
  • Focused vs Distracted
  • Close Read vs Distant Read
  • Fixed vs Processual
  • Digital (virtual) vs Real (physical)

Understanding the interaction between the digital and physical is part of the heuristic value that these binaries bring to the research activity. However, in relation to the interplay between the digital and the cultural, examples, such as Marquese Scott’s Glitch inspired Dubstep dancing (below), raise important questions about how these binaries interact and are represented in culture more generally (e.g. as notions of The New Aesthetic). 
Glitch inspired Dubstep Dancing (Dancer: Marquese Scott)
Here, I am not interested in critiquing the use of binaries per se (though such a critique of course remains pertinent – and modulations might be a better way to think of digital irruptions); rather, I think they are interesting for the indicative light they cast on drawing analytical distinctions between categories and collections related to the digital itself. We can see them as lightweight theories, and as Moretti (2007) argues:
Theories are nets, and we should evaluate them, not as ends in themselves, but for how they concretely change the way we work: for how they allow us to enlarge the… field, and re-design it in a better way, replacing the old, useless distinctions… with new temporal, spatial, and morphological distinctions (Moretti 2007: 91, original emphasis). 
These binaries can be useful means of thinking through many of the positions and debates that take place within both theoretical and empirical work on mapping the digital.
  1. Linearity versus Hypertextuality: The notion of a linear text, usually fixed within a paper form, is one that has been taken for granted within the humanities. Computational systems, however, have challenged this model of reading because of the ease with which linked data can be incorporated into digital text. This has meant that experimentation with textual form, and with the way in which a reader might negotiate a text, can be explored. Of course, the primary model for hypertextual systems today is strongly associated with the World Wide Web and HTML, although other systems have been developed.
  2. Narrative versus Database: The importance of narrative as an epistemological frame for understanding has been hugely important in the humanities. Whether as a starting point for beginning an analysis, or through attempts to undermine or problematize narratives within texts, humanities scholars have usually sought to use narrative as an explanatory means of exploring both the literary and the historical. Computer technology, however, has offered scholars an alternative way of understanding how knowledge might be structured, through the notion of the database. This approach, personified in the work of Lev Manovich (2001), has been argued to represent an important aspect of digital media, and more importantly of the remediation of old media forms in digital systems.
  3. Permanent versus Ephemeral: Much ‘traditional’ or ‘basic’ humanities scholarship has been concerned with objects and artifacts that are relatively stable when compared with digital works. This is especially so in disciplines that have internalized the medium specificity of a form, for example the book in English Literature, which shifts attention to the content of the medium. In contrast, digital works are notoriously ephemeral in their form, both in the materiality of the substrates (e.g. computer memory chips, magnetic tape/disks, plastic disks, etc.) and in the plasticity of the form. This also bears upon the lack of an original from which derivative copies are made; indeed, it could be argued that in the digital world there is only the copy (although recent moves in Cloud computing and digital rights management are partial attempts to re-institute the original through technical means).
  4. Bound versus Unbound: A notable feature of digital artifacts is that they tend to be unbound in character. Unlike books, which have clear boundary points marked by the cardboard that makes up their covers, digital objects’ boundaries are drawn by the file format in which they are encoded. This makes for an extremely permeable border, and one that is made of the same digital code that marks the content. Additionally, digital objects are easily networked and aggregated, processed and transcoded into other forms, further problematizing a boundary point. In terms of reading practices, the permeability of boundaries can radically change the reading experience.
  5. Individual versus Social: Traditional humanities has focused strongly on approaches to texts that are broadly individualistic, inasmuch as the reader is understood to undertake certain bodily practices (e.g. sitting in a chair, book on knees, concentration on the linear flow of text). Digital technologies, particularly when networked, open these practices up to a much more social experience of reading, with e-readers like the Amazon Kindle encouraging the sharing of highlighted passages, and Tumblr-type blogs and Twitter enabling discussion around and within the digital text.
  6. Deep versus Shallow: Deep reading is the presumed mode of understanding that requires time and attention to develop a hermeneutic reading of a text; this form requires humanistic reading skills to be carefully learned and applied. In contrast, a shallow mode is a skimming or surface reading of a text, more akin to gathering a general overview or précis of the text.
  7. Focused versus Distracted: Relatedly, the notion of focused reading is also implicitly understood as an important aspect of humanities scholarship. This is the focus on a particular text, set of texts or canon, and the space and time to give full attention to them. By contrast, in a world of real-time information and multiple windows on computer screens, reading practices are increasingly distracted, partial and fragmented (hyperattention).
  8. Close Reading versus Distant Reading: Distant reading is the application of technologies to enable a great number of texts to be incorporated into an analysis, through the ability of computers to process large quantities of text relatively quickly (see the sketch after this list). Moretti (2007) has argued that this approach allows us to see social and cultural forces at work through collective cultural systems.
  9. Fixed versus Processual: The digital medium facilitates new ways of presenting media that are highly computational, and this raises new challenges for scholarship into new media and the methods for approaching these mediums. It also raises questions for older humanities disciplines that increasingly access their research objects through the mediation of processual computational systems, and more particularly through software and computer code.
  10. Real (physical) versus Digital (virtual): This is a common dichotomy that draws some form of dividing line between the so-called real and the so-called digital. 
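
To make the contrast in item 8 concrete, the following is a minimal sketch of distant reading in Python: rather than a hermeneutic engagement with a single work, the computer simply aggregates word frequencies across a whole corpus. The corpus folder, the .txt file convention and the crude tokenisation are illustrative assumptions, not a canonical method.

    # Minimal distant-reading sketch: aggregate word counts across a corpus
    # of plain-text files. "corpus/" is a hypothetical folder of digitised texts.
    import collections
    import pathlib
    import re

    def corpus_term_counts(corpus_dir, top_n=20):
        """Return the top_n most frequent words across every .txt file in corpus_dir."""
        counts = collections.Counter()
        for path in pathlib.Path(corpus_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8", errors="ignore").lower()
            counts.update(re.findall(r"[a-z]+", text))
        return counts.most_common(top_n)

    if __name__ == "__main__":
        for term, n in corpus_term_counts("corpus"):
            print(term, n)

Even something this simple illustrates the shift of scale: the unit of analysis is no longer the individual text but the statistical pattern across the collection.
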
The New Aesthetic ‘pixel’ fashion 

I am outlining these binaries because I think they are useful for helping us to draw the contours of what I call elsewhere ‘computationality’, and for its relationship to the New Aesthetic. In order to move beyond a ‘technological sublime’, we should begin the theoretical and empirical projects through the development of ‘cognitive maps’ (Jameson 1990). Additionally, as the digital increasingly structures the contemporary world, curiously, it also withdraws, and becomes harder and harder for us to focus on as it is embedded, hidden, off-shored or merely forgotten about. Part of the challenge is to bring the digital (code/software) back into visibility for research and critique.

The New Aesthetic is a means for showing how the digital surfaces in a number of different places and contexts. It is not purely digital production or output; it can also be the concepts and frameworks of the digital that are represented (e.g. voxels). Although the New Aesthetic has tended to highlight 8-bit visuals and the ‘sensor-vernacular’ or ‘seeing like a machine’ (e.g. Bridle/Sterling), I believe there is more to be explored in terms of ‘computationality’. Identified as such, the ‘New Aesthetic’ is a useful concept for thinking through and about the visual representation of computationality – or better, for re-presenting the computational more generally and its relationship to a particular way-of-being in the world and its mediation through technical media (here specifically computational media).
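
Since the pixelated, 8-bit look recurs throughout the imagery collected under the New Aesthetic banner (the sofa, rug and fashion examples above and below), it is worth noting how trivially that look can be produced computationally. The following is a minimal sketch, assuming the Pillow imaging library and an illustrative filename: the image is downsampled to a coarse grid and scaled back up with nearest-neighbour interpolation, so each block reads as a single oversized ‘pixel’.

    # Minimal pixelation sketch using Pillow; "input.jpg" and the block size
    # are illustrative assumptions, not a reference to any particular work.
    from PIL import Image

    def pixelate(path, block=16):
        """Downsample to a coarse grid, then upscale with nearest-neighbour."""
        img = Image.open(path)
        small = img.resize((max(1, img.width // block), max(1, img.height // block)))
        return small.resize(img.size, Image.NEAREST)

    if __name__ == "__main__":
        pixelate("input.jpg").save("pixelated.png")

The point is not the technique itself but its legibility: the pixel aesthetic foregrounds, at the level of the surface, the grid logic of the computational image.
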
Preen Spring/Summer 2012 | Source: Style.com

Previously I argued that this New Aesthetic is a form of ‘abduction aesthetic’ linked to the emergence of computationality as an ontotheology. Computationality is here understood as a specific historical epoch defined by a certain set of computational knowledges, practices, methods and categories. The abductive aesthetic (or pattern aesthetic) is linked to a notion of computational patterns and pattern recognition as a means of cultural expression. I argue that we should think about software/code through a notion of computationality as an ontotheology. Computationality (as an ontotheology) creates a new ontological ‘epoch’, a new historical constellation of intelligibility. In other words, code/software is the paradigmatic case of computationality, and presents us with a research object which is located at all major junctures of modern society and is therefore unique in enabling us to understand the present situation – as a collection, network, or assemblage of ‘coded objects’ or ‘code objects’.

Computationality is distinct from the ‘challenging-forth’ of technicity as Heidegger described it – in contrast, computationality has a mode of revealing that is a ‘streaming-forth’. One aspect of this is that streaming-forth generates second-order information and data to maintain a world which is itself seen and understood as flow, but drawn from a universe which is increasingly understood as object-oriented and discrete. Collected information is processed, and feedback is part of the ecology of computationality. Computational devices not only withdraw – mechanical devices such as car engines clearly also withdraw – they also constantly press to be present-at-hand, alternating between the two. This I call a form of glitch ontology.
Mode of Revealing
  • Technicity (modern technology): Challenging-forth (Gestell)
  • Computationality (postmodern technology): Streaming-forth

Paradigmatic Equipment
  • Technicity: Technical devices, machines.
  • Computationality: Computational devices, computers, processors.

Goals (projects)
  • Technicity: 1. Unlocking, transforming, storing, distributing, and switching about Standing Reserve (Bestand). 2. Efficiency.
  • Computationality: 1. Trajectories, processing information, algorithmic transformation (aggregation, reduction, calculation), as data reserve (Cloudscape). 2. Computability.

Identities (roles)
  • Technicity: Ordering-beings
  • Computationality: Streaming-beings

Paradigmatic Epistemology
  • Technicity: Engineer – engineering is exploiting basic mechanical principles to develop useful tools and objects, for example using time-motion studies, Methods-Time Measurement (MTM), instrumental rationality. Subtractive logic (processed materials from the world yield resources).
  • Computationality: Design – design is the construction of an object or a system, not just what it looks like and feels like but how it works and the experience it generates, for example using information theory, graph theory, data visualisation, communicative rationality, real-time streams. Additive logic (processed data is a supplement to the world).

Table 1: Technicity vs Computationality

Computational devices appear to oscillate rapidly between Vorhandenheit and Zuhandenheit (present-at-hand and ready-to-hand) – a glitch ontology. Or, perhaps better, they are constantly becoming ready-to-hand/unready-to-hand in quick alternation, where “quick” can mean microseconds, milliseconds, or seconds, repeatedly and in quick succession. This aspect of breakdown has been acknowledged as an issue within human–computer design and is seen as one of pressing concern, to be ‘fixed’ or made invisible to the computational device user (Winograd and Flores 1987).
The oscillation creates the ‘glitch’ that is a specific feature of computation as opposed to other technical forms (Berry 2011). This is the glitch that creates the conspicuousness that breaks the everyday experience of things, and more importantly breaks the flow of things being comfortably at hand. This is a form that Heidegger called Unreadyness-to-hand (Unzuhandenheit). Heidegger defines three forms of unreadyness-to-hand: Obtrusiveness (Aufdringlichkeit), Obstinacy (Aufsässigkeit), and Conspicuousness (Auffälligkeit), where the first two are non-functioning equipment and the latter is equipment that is not functioning at its best (see Heidegger 1978, fn 1). In other words, if equipment breaks you have to think about it.

It is important to note that conspicuousness is not completely broken-down equipment. Conspicuousness, then, ‘presents the available equipment as in a certain unavailableness’ (Heidegger 1978: 102–3), so that as Dreyfus (2001: 71) explains, we are momentarily startled, and then shift to a new way of coping, but which, if help is given quickly or the situation is resolved, then ‘transparent circumspective behaviour can be so quickly and easily restored that no new stance on the part of Dasein is required’ (Dreyfus 2001: 72). As Heidegger puts it, it requires ‘a more precise kind of circumspection, such as “inspecting”, checking up on what has been attained, [etc.]’ (Dreyfus 2001: 70).

In other words, computation, due to its glitch ontology, continually forces a contextual slowing-down at the level of the mode of being of the user; the continuity of flow or practice is interrupted by minute pauses and breaks (these may be beyond conscious perception, as such). This is not to say that analogue technologies do not break down; the difference is the conspicuousness of digital technologies in their everyday working, in contrast to the obstinacy or obtrusiveness of analogue technologies, which tend to work or not. I am also drawing attention to the discrete granularity of the conspicuousness of digital technologies, which can be measured technically in seconds, milliseconds, or even microseconds. This glitch ontology raises interesting questions in relation to basic questions about our experience of computational systems.

My interest in the specificity of the New Aesthetic stems from its implicit recognition of the extent to which digital media have permeated our everyday lives. We could perhaps say that the New Aesthetic is a form of ‘breakdown’ art linked to the conspicuousness of digital technologies. Not just the use of digital tools, of course, but also a language of new media (as Manovich would say): the frameworks, structures, concepts and processes represented by computation. That is, both the presentation of computation and its representational modes. It is also interesting to the extent that it both represents computation and draws attention to this glitch ontology, for example through the representation of the conspicuousness of glitches and other digital artefacts (see also Menkman 2010 for a notion of critical media aesthetics and the idea of glitch studies).
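
As a concrete illustration of how such glitch artefacts are commonly produced (the practice often called ‘databending’ in the glitch art scene that Menkman documents), here is a minimal sketch with illustrative filenames and parameters: a few bytes of a copied JPEG are overwritten at random so that the decoder mis-renders parts of the image, making the otherwise invisible encoding conspicuously present.

    # Minimal databending sketch: corrupt a handful of bytes in a copy of a
    # JPEG so the decoder produces visible glitch artefacts. Filenames and
    # parameters are illustrative; the header region is skipped so the file
    # usually still opens (assumes the source file is larger than 1024 bytes).
    import random

    def databend(src, dst, n_glitches=10, seed=1):
        data = bytearray(open(src, "rb").read())
        rng = random.Random(seed)
        for _ in range(n_glitches):
            pos = rng.randrange(1024, len(data))  # avoid the file header
            data[pos] = rng.randrange(256)
        with open(dst, "wb") as f:
            f.write(bytes(data))

    if __name__ == "__main__":
        databend("source.jpg", "glitched.jpg")

In the terms used above, the glitch makes the equipmental substrate conspicuous: the file stops being a transparent window onto an image and presents itself, however briefly, as code.
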
Other researchers (Beaulieu and de Rijcke 2012) have referred to ‘Network Realism’ to draw attention to some of these visual practices, particularly the way these networked visualisations are produced. However, the New Aesthetic is interesting in remaining focussed on the aesthetic in the first instance (rather than the sociological, etc.). This is useful in order to examine the emerging visual culture, but also to try to discern the aesthetic forms instantiated within it.
As I argued previously, the New Aesthetic is perhaps the beginning of a new kind of Archive, an Archive in Motion – what Bernard Stiegler (n.d.) called the Anamnesis (the embodied act of memory as recollection or remembrance) combined with Hypomnesis (the making-technical of memory through writing, photography, machines, etc.). Thus, particularly in relation to the affordances given by the networked and social media within which it circulates, combined with a set of nascent practices of collection, archive and display, the New Aesthetic is distinctive in a number of ways.
Firstly, it gives a description and a way of representing and mediating the world in and through the digital that is understandable as an infinite archive (or collection). Secondly, it alternately highlights that something digital is happening in culture – of which we have only barely been conscious – and the way in which culture is happening to the digital. Lastly, the New Aesthetic points the direction of travel for the possibility of a Work of Art in the digital age – something Heidegger thought impossible under the conditions of technicity, but which remains open, perhaps, under computationality.
In this, the New Aesthetic is, however, a pharmakon, in that it is both potentially poison and cure for an age of pattern matching and pattern recognition. If the archive was the set of rules governing the range of expression, following Foucault, and the database the grounding cultural logic of software cultures, following Manovich, we might conclude that the New Aesthetic is the cultural eruption of the grammatisation of software logics into everyday life. Under a symptomology, the New Aesthetic can be seen as surfacing computational patterns, and in doing so it articulates and re-presents the unseen and little understood logic of computation, which lies like plasma under, over, and in the interstices between the modular elements of an increasingly computational society.
Bibliography
Beaulieu, A. and de Rijcke, S. (2012) Network Realism, accessed 20/05/2012, http://networkrealism.wordpress.com/

Dreyfus, H. (2001) Being-in-the-world: A Commentary on Heidegger’s Being and Time, Division I. USA: MIT Press.

Heidegger, M. (1978) Being and Time. London: Wiley–Blackwell.

Jameson, F. (2006) Postmodernism or the Cultural Logic of Late Capitalism, in Kellner, D. Durham, M. G. (eds.) Media and Cultural Studies Keyworks, London: Blackwell.

Manovich, L. (2001) The Language of New Media. London: MIT Press.

Menkman, R. (2010) Glitch Studies Manifesto, accessed 20/5/2012, http://rosa-menkman.blogspot.com/2010/02/glitch-studies-manifesto.html

Moretti, F. (2007) Graphs, Maps, Trees: Abstract Models for a Literary History, London: Verso.

Stiegler, B. (n.d.) Anamnesis and Hypomnesis, accessed 06/05/2012, http://arsindustrialis.org/anamnesis-and-hypomnesis

Winograd, T. and Flores, F. (1987) Understanding Computers and Cognition: A New Foundation for Design, London: Addison Wesley.
