
Postdigital Aesthetics: Art, Computation and Design

Edited by David M. Berry and Michael Dieter.  


Flat Theory

The world is flat.[1] Or perhaps better, the world is increasingly “layers”. Certainly the augmediated imaginaries of the major technology companies are now structured around a post-retina notion of mediation made possible and informed by the digital transformations ushered in by mobile technologies that provide a sense of place, as well as a sense of management of complex real-time streams of information and data.

Two new competing computational interface paradigms are now deployed in the latest versions of Apple's and Google's operating systems, but more notably they serve as regulatory structures to guide the design and strategy related to corporate policy. The first is "flat design", introduced by Apple through iOS 8 and OS X Yosemite as a refresh of the ageing operating systems' human-computer interface guidelines, essentially stripping the operating system of historical baggage related to techniques of design that disguised the limitations of a previous generation of technology, in terms of both screen and processor capacity. It is important to note, however, that Apple avoids talking about "flat design" as its design methodology, preferring to talk in terms of each platform's specificity, that is, about the design of iOS or of OS X. The second is "material design", introduced by Google into its Android L, now Lollipop, operating system, which also sought to bring some sense of coherence to a multiplicity of Android devices, interfaces, OEMs and design strategies. More generally, "flat design" is "the term given to the style of design in which elements lose any type of stylistic characters that make them appear as though they lift off the page" (Turner 2014). As Apple argues, one should "reconsider visual indicators of physicality and realism" and think of the user interface as "play[ing] a supporting role"; that is, techniques of mediation through the user interface should aim to provide a new kind of computational realism that presents "content" as ontologically prior to, or separate from, its container in the interface (Apple 2014). This is in contrast to "rich design", which has been described as "adding design ornaments such as bevels, reflections, drop shadows, and gradients" (Turner 2014).

I want to explore these two main paradigms – and to a lesser extent the flat design methodology of Microsoft's Windows 8 and Windows Phone, the since renamed Metro interface (now Microsoft Modern UI) – through the notion of a comprehensive attempt by both Apple and Google to produce a rich and diverse umwelt, or ecology, linked through what Apple calls "aesthetic integrity" (Apple 2014). This is both a response to their growing landscape of devices, platforms, systems, apps and policies, and an attempt to provide some sense of operational strategy in relation to computational imaginaries. Essentially, both approaches share an axiomatic approach to conceptualising the building of a system of thought, in other words a primitivist predisposition, which draws from a neo-Euclidean model of geons (for Apple) and from a notion of intrinsic value or neo-materialist formulations of essential characteristics (for Google). That is, they encapsulate a version of what I am calling here flat theory. Both of these companies are trying to deal with the problematic of multiplicities in computation, and the requirement that multiple data streams, notifications and practices have to be combined and managed within the limited geography of the screen. In other words, both approaches attempt to create what we might call aggregate interfaces by combining techniques of layout, montage and collage onto computational surfaces (Berry 2014: 70).

The "flat turn" has not happened in a vacuum, however: it is the result of a new generation of computational hardware, smart silicon design and retina screen technologies. This was driven in large part by the mobile device revolution, which has transformed not only the taken-for-granted assumptions of historical computer interface design paradigms (e.g. WIMP) but also the subject position of the user, particularly as structured through the Xerox/Apple notion of single-click functional design of the interface. Indeed, one of the striking features of the new paradigm of flat design is that it is a design philosophy of multiplicity and multi-event. The flat turn is therefore about modulation, not about enclosure as such; indeed, it is a truly processual form that constantly shifts and changes, and in many ways acts as a signpost for the future interfaces of real-time algorithmic and adaptive surfaces and experiences. The structure of control for flat design interfaces follows that of the control society: it is "short-term and [with] rapid rates of turnover, but also continuous and without limit" (Deleuze 1992). To paraphrase Deleuze: humans are no longer in enclosures, certainly, but everywhere humans are in layers.

Apple uses a series of concepts to articulate its notion of flat design, which include aesthetic integrity, consistency, direct manipulation, feedback, metaphors, and user control (Apple 2014). The haptic experience of this new flat user interface has been described as building on the experience of "touching glass" to develop the "first post-Retina (Display) UI (user interface)" (Cava 2013). This is the notion of layered transparency, or better, layers of glass upon which the interface elements are painted through a logical internal structure of Z-axis layers. This laminate structure enables meaning to be conveyed through the organisation of the Z-axis, both in terms of content and in order to place that content within a process or within the user interface system itself.
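To make this laminate logic concrete, the sketch below uses ordinary web technologies (TypeScript and CSS) purely as an analogy: it is not Apple's own framework, and the layer names and opacity values are hypothetical, but it shows how meaning can be organised along a Z-axis of translucent panes, with lower layers remaining partially visible through the "glass" above them.

```typescript
// A minimal sketch (not Apple's UIKit or Core Animation API) of the laminate
// logic described above: translucent "glass" panes stacked along a z-axis,
// so that the content beneath each pane remains partially visible.

interface GlassPane {
  name: string;    // hypothetical layer names, e.g. "content", "toolbar"
  zIndex: number;  // position in the laminate: higher panes sit closer to the viewer
  alpha: number;   // 0 (fully transparent) to 1 (fully opaque)
}

function renderPane(pane: GlassPane): HTMLDivElement {
  const el = document.createElement("div");
  el.textContent = pane.name;
  el.style.position = "absolute";
  el.style.top = "0";
  el.style.left = "0";
  el.style.right = "0";
  el.style.zIndex = String(pane.zIndex);
  // Translucency: the pane is "painted on glass", letting lower layers show through.
  el.style.background = `rgba(255, 255, 255, ${pane.alpha})`;
  // Frosted-glass blur where the browser supports it.
  el.style.setProperty("backdrop-filter", "blur(10px)");
  return el;
}

// The laminate: content sits lowest; chrome and notifications float above it.
const laminate: GlassPane[] = [
  { name: "content", zIndex: 0, alpha: 1.0 },
  { name: "toolbar", zIndex: 1, alpha: 0.6 },
  { name: "notification banner", zIndex: 2, alpha: 0.4 },
];

for (const pane of laminate) {
  document.body.appendChild(renderPane(pane));
}
```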

Google, similarly, has reorganised its computational imaginary around a flattened, layered paradigm of representation through the notion of material design. Matias Duarte, Google's Vice President of Design and a Chilean computer interface designer, declared that this approach uses the notion that it "is a sufficiently advanced form of paper as to be indistinguishable from magic" (Bohn 2014). But this is magic with constraints and affordances built into it: "if there were no constraints, it's not design — it's art", Google claims (see Interactive Material Design) (Bohn 2014). Indeed, Google argues that the "material metaphor is the unifying theory of a rationalized space and a system of motion", further arguing:

The fundamentals of light, surface, and movement are key to conveying how objects move, interact, and exist in space and in relation to each other. Realistic lighting shows seams, divides space, and indicates moving parts… Motion respects and reinforces the user as the prime mover… [and together] They create hierarchy, meaning, and focus (Google 2014). 

This notion of materiality is a weird materiality in as much as Google “steadfastly refuse to name the new fictional material, a decision that simultaneously gives them more flexibility and adds a level of metaphysical mysticism to the substance. That’s also important because while this material follows some physical rules, it doesn’t create the “trap” of skeuomorphism. The material isn’t a one-to-one imitation of physical paper, but instead it’s ‘magical'” (Bohn 2014). Google emphasises this connection, arguing that “in material design, every pixel drawn by an application resides on a sheet of paper. Paper has a flat background color and can be sized to serve a variety of purposes. A typical layout is composed of multiple sheets of paper” (Google Layout, 2014). The stress on material affordances, paper for Google and glass for Apple are crucial to understanding their respective stances in relation to flat design philosophy.[2]

Glass (Apple): Translucency, transparency, opaqueness, limpidity and pellucidity. 

Paper (Google): Opaque, cards, slides, surfaces, tangibility, texture, lighted, casting shadows. 

Paradigmatic Substances for Materiality

In contrast to the layers of glass that inform the logics of transparency, opaqueness and translucency of Apple's flat design, Google uses the notion of remediated "paper" as a digital material; that is, this "material environment is a 3D space, which means all objects have x, y, and z dimensions. The z-axis is perpendicularly aligned to the plane of the display, with the positive z-axis extending towards the viewer. Every sheet of material occupies a single position along the z-axis and has a standard 1dp thickness" (Google 2014). One might think, then, of Apple as painting on layers of glass, and Google as placing thin paper objects (material) upon background paper. A key difference, however, lies in Google's use of light and shadow, which enables the light source, located in a similar position to the user of the interface, to cast shadows of the material objects onto the objects and sheets of paper that lie beneath them (see Jitkoff 2014). Nonetheless, a laminate structure is key to the representational grammar that constitutes both of these platforms.
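By way of contrast, the following sketch renders the material "paper" metaphor in the same web idiom. Again this is only an analogy, not Google's Android or Material Design API; the sheet names and elevation values are hypothetical, but it illustrates how a sheet's position along the z-axis translates into the shadow it casts on the surfaces below it.

```typescript
// A minimal sketch (not Google's Android or Material Design API) of the "paper"
// logic quoted above: opaque sheets positioned along the z-axis, whose notional
// elevation determines the shadow they cast on the surfaces beneath them.

interface MaterialSheet {
  name: string;        // hypothetical sheet names, e.g. "background", "card"
  elevationDp: number; // notional height above the background along the z-axis
}

function renderSheet(sheet: MaterialSheet): HTMLDivElement {
  const el = document.createElement("div");
  el.textContent = sheet.name;
  el.style.position = "absolute";
  el.style.background = "#ffffff"; // paper is opaque, unlike Apple's translucent glass
  el.style.zIndex = String(sheet.elevationDp);
  // The light source sits roughly at the position of the viewer, so higher
  // sheets cast larger, softer shadows onto the paper below them.
  const blur = sheet.elevationDp * 2;
  const offsetY = sheet.elevationDp;
  el.style.boxShadow = `0 ${offsetY}px ${blur}px rgba(0, 0, 0, 0.3)`;
  return el;
}

// Example sheets at increasing elevations (values are illustrative only).
const sheets: MaterialSheet[] = [
  { name: "background", elevationDp: 0 }, // flush with the background: no shadow
  { name: "card", elevationDp: 2 },
  { name: "dialog", elevationDp: 24 },
];

for (const sheet of sheets) {
  document.body.appendChild(renderSheet(sheet));
}
```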

Armin Hofmann, head of the graphic design department at the Schule für Gestaltung Basel (Basel School of Design), was instrumental in developing the graphic design style known as the Swiss Style. Designs from 1958 and 1959.

Interestingly, both design strategies emerge from an engagement with and reconfiguration of the principles of design that draw from the Swiss style (sometimes called the International Typographic Style) in design (Ashghar 2014, Turner 2014).[3] This approach emerged in the 1940s, and

mainly focused on the use of grids, sans-serif typography, and clean hierarchy of content and layout. During the 40’s and 50’s, Swiss design often included a combination of a very large photograph with simple and minimal typography (Turner 2014).

The design grammar of the Swiss style has been combined with minimalism and the principle of "responsive design", that is, the idea that the interface should respond to the materiality and specificity of the device and to the context in which it is displayed. Minimalism is a "term used in the 20th century, in particular from the 1960s, to describe a style characterized by an impersonal austerity, plain geometric configurations and industrially processed materials" (MoMA 2014). Robert Morris, one of the principal artists of Minimalism and author of the influential Notes on Sculpture, used "simple, regular and irregular polyhedrons. Influenced by theories in psychology and phenomenology", which, he argued, "established in the mind of the beholder 'strong gestalt sensation', whereby form and shape could be grasped intuitively" (MoMA 2014).[4]

Robert Morris: Untitled (Scatter Piece), 1968-69, felt, steel, lead, zinc, copper, aluminum, brass, dimensions variable; at Leo Castelli Gallery, New York. Photo Genevieve Hanson. All works this article © 2010 Robert Morris/Artists Rights Society (ARS), New York.

The implications of these two competing world-views are far-reaching, in that much of the world's initial contact, or touch points, for data services, real-time streams and computational power is increasingly through the platforms controlled by these two companies. However, they are also deeply influential across the programming industries, and we see alternatives and multiple reconfigurations in relation to the challenge raised by the "flattened" design paradigms. That is, they both represent, if only in potentia, a power relation and, through this, an ideological veneer on computation more generally. Further, with the proliferation of computational devices – and the screenic imaginary associated with them in the contemporary computational condition – there appears a new logic which lies behind, justifies and legitimates these design methodologies.

It seems to me that these new flat design philosophies, in the broad sense, produce an order in precepts and concepts in order to give meaning and purpose not only to interactions with computational platforms but also, more widely, to everyday life. Flat design and material design are competing philosophies that offer alternative patterns of both creation and interpretation, which are meant to have implications not only for interface design but, more broadly, for the ordering of concepts and ideas, and for the practices and experience of computational technologies broadly conceived. Another way to put this would be to think about these moves as a computational founding: the generation of, or argument for, an axial framework for building, reconfiguration and preservation.

Indeed, flat design provides, and more importantly serves as, a translational or metaphorical heuristic both for re-presenting the computational and for teaching consumers and users how to use and manipulate new complex computational systems and stacks. In other words, in a striking visual technique flat design communicates the vertical structure of the computational stack, on which the Stack corporations are themselves constituted. It also begins to move beyond the specificity of the device as the privileged site of computational interface interaction from beginning to end. Interface techniques are abstracted away from the specificity of the device, for example through Apple's "handoff" continuity framework, which also potentially changes reading and writing practices in interesting ways.

These new interface paradigms, introduced by the flat turn, open very interesting possibilities for the application of interface criticism, through unpacking and exploring the major trends and practices of the Stacks, that is, the major technology companies. Further than this, I think the notion of layers is instrumental in mediating the experience of an increasingly algorithmic society (think of dashboards, personal information systems, the quantified self, etc.), and as such provides both an interpretative frame for a world of computational patterns and a constituting grammar for building these systems in the first place. The notion of the postdigital may also be a useful way into thinking about the link between art, computation and design sketched here (see Berry and Dieter, forthcoming), as well as about the importance of notions of materiality for the conceptualisations deployed by designers working within both the flat design and material design paradigms – whether of paper, glass, or some other "material" substance.[5]

Notes

[1] Many thanks to Michael Dieter and Søren Pold for the discussion which inspired this post. 
[2] The choice of paper and glass as the founding metaphors for the flat design philosophies of Google and Apple raises interesting questions for the way in which these companies articulate the remediation of other media forms, such as books, magazines, newspapers, music, television and film. Indeed, the very idea of "publication", and the material carrier for the notion of publication, is informed by this materiality, even if only as a notional affordance given by the conceptualisation. It would be interesting to see, for example, how the book is remediated through each of the design philosophies that inform both companies.
[3] One is struck by the posters produced in the Swiss style which date to the 1950s and 60s but which today remind one of the mobile device screens of the 21st Century. 
[4] There are also some interesting links to be explored with the Superflat style, a postmodern art movement founded by the artist Takashi Murakami and influenced by manga and anime, both in terms of the aesthetic and in relation to the cultural moment in which "flatness" is linked to "shallow emptiness".
[5] There is some interesting work to be done in thinking about the non-visual aspects of flat theory, such as the increasing use of APIs (for example, RESTful APIs), but also sound interfaces that use "flat" sound to indicate spatiality in terms of interface or interaction design.

Bibliography

Apple (2014) iOS Human Interface Guidelines, accessed 13/11/2014, https://developer.apple.com/library/ios/documentation/userexperience/conceptual/mobilehig/Navigation.html

Ashghar, T. (2014) The True History Of Flat Design, accessed 13/11/2014, http://www.webdesignai.com/flat-design-history/

Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury.

Berry, D. M. and Dieter, M. (forthcoming) Postdigital Aesthetics: Art, Computation and Design, Basingstoke: Palgrave Macmillan.

Bohn, D. (2014) Material world: how Google discovered what software is made of, The Verge, accessed 13/11/2014, http://www.theverge.com/2014/6/27/5849272/material-world-how-google-discovered-what-software-is-made-of

Cava, M. D. (2013) Jony Ive: The man behind Apple’s magic curtain, USA Today, accessed 1/1/2014, http://www.usatoday.com/story/tech/2013/09/19/apple-jony-ive-craig-federighi/2834575/

Deleuze, G. (1992) Postscript on the Societies of Control, October, vol. 59: 3-7.

Google (2014) Material Design, accessed 13/11/2014, http://www.google.com/design/spec/material-design/introduction.html

Google Layout (2014) Principles, Google, accessed 13/11/2014, http://www.google.com/design/spec/layout/principles.html

Jitkoff, N. (2014) This is Material Design, Google Developers Blog, accessed 13/11/2014,  http://googledevelopers.blogspot.de/2014/06/this-is-material-design.html

MoMA (2014) Minimalism, MoMA, accessed 13/11/2014, http://www.moma.org/collection/details.php?theme_id=10459

Turner, A. L. (2014) The history of flat design: How efficiency and minimalism turned the digital world flat, The Next Web, accessed 13/11/2014, http://thenextweb.com/dd/2014/03/19/history-flat-design-efficiency-minimalism-made-digital-world-flat/

Interview with David M. Berry at re:publica 2013

Open science interview at re:publica conference in Berlin, 2013, by Kaja Scheliga.

Kaja Scheliga: So to start off… what is your field, what do you do?


David M. Berry: My field is broadly conceived as digital humanities or software studies. I focus in particular on critical approaches to understanding technology, through theoretical and philosophical work, so, for example, I have written a book called Philosophy of Software and I have a new book called Critical Theory and The Digital but I am also interested in the multiplicity of practices within computational culture as well, and the way the digital plays out in a political economic context.

KS: Today, here at the re:publica you talked about digital humanities. What do you associate with the term open science?

DB: Well, open science has very large resonances with Karl Popper's notion of the open society, and I think the notion of open itself is interesting in that kind of construction, because it implies a "good". To talk about open science implies firstly that closed science is "bad", that science should be somehow widely available, that everything is published and there is essentially a public involvement in science. It has a lot of resonances, not necessarily clear. It is a cloudy concept.

KS: So where do you see the boundary between open science and digital humanities? Do they overlap or are they two separate fields? Is one part of the other?


DB: Yes, I think, as I was talking in the previous talk about how digital humanities should be understood within a constellation, I think open science should also be understood in that way. There is no single concept as such, and we can bring up a lot of different definitions, and practitioners would use it in multiple ways depending on their fields. But I think, there is a kind of commitment towards open access, the notion of some kind of responsibility to a public, the idea that you can have access to data and to methodologies, and that it is published in a format that other people have access to, and also there is a certain democratic value that is implicit in all of these constructions of the open: open society, open access, open science, etc. And that is really linked to a notion of a kind of liberalism that the public has a right, and indeed has a need to understand.  And to understand in order to be the kind of citizen that can make decisions themselves about science. So in many ways it is a legitimate discourse, it is a linked and legitimating discourse about science itself, and it is a way of presenting science as having a value to society.

KS:  But is that justified, do you agree with this concept? Or do you rather look at it critically?

DB: Well, I am a critical theorist. So, for me these kinds of concepts are never finished. They always have within them embedded certain kinds of values and certain kinds of positions. And so for me it is an interesting concept and I think “open science” is interesting in that it emerges at a certain historical juncture, and of course with the notion of a “digital age” and all the things that have been talked about here at the re:publica, everyone is so happy and so progressive and the future looks so bright – apparently…

KS: Does it?

DB: Yes, well, from the conference perspective, because re:publica is a technology conference, there is this whole discourse of progress – which is kind of an American techno-utopian vision that is really odd in a European context – for me anyway. So, being a critical theorist, it does not necessarily mean that I want to dismiss the concept, but I think it is interesting to unpick the concept and see how it plays out in various ways. In some ways it can be very good, it can be very productive, it can be very democratic; in other ways it can be used, for example, as a certain legitimating tool to get funding for certain kinds of projects, which means other projects, which are labelled "closed", are no longer able to get funded. So, it is a complex concept; it is not necessarily "good" or "bad".

KS: So, not saying ‘good’ or ‘bad’, but looking at the dark side of say openness, where do you see the limits? Or where do you see problem zones?

DB: Well, again, to talk about the “dark side,” it is kind of like Star Wars or something. We have to be very careful with that framework, because the moment you start talking about the dark side of the digital, which is a current, big discussion going on, for example, in the dark side of the digital humanities, I think it is a bit problematic. That is why thinking in terms of critique is a much better way to move forward. So for me, what would be more interesting would be to look at the actual practices of how open science is used and deployed. Which practitioners are using it? Which groups align themselves with it? Which policy documents? And which government policies are justified by rolling back to open science itself? And then, it is important to perform a kind of genealogy of the concept of “open science” itself. Where does it come from? What is it borrowing from? Where is the discussion over that term? Why did we come to this term being utilised in this way? And I think that then shows us the force of a particular term, and places it within an historical context. Because open science ten years ago may have meant one thing, but open science today might mean something different. So, it is very important we ask these questions.

KS: All right. And are there any open science projects that come to mind, spontaneously, right now?


DB: I’m not sure they would brand themselves as “open science” but I think CERN would be for me a massive open science project, and which likes to promote itself in these kinds of ways. So, the idea of a public good, publishing their data, having a lot of cool things on their website the public can look at, but ultimately, that justification for open science is disconnected because, well, what is the point of finding the Higgs Boson, what is the actual point, where will it go, what will it do? And that question never gets asked because it is open science, so the good of open science makes it hard for us to ask these other kinds of questions. So, those are the kinds of issues that I think are really important. And it is also interesting in terms of, for example, there was an American version of CERN which was cancelled. So why was CERN built, how did open science enable that? I mean, we are talking huge amounts of money, large amounts of effort, would this money have been better transferred to solving the problem of unemployment, you know, we are in a fiscal crisis at the moment, a financial catastrophe and these kinds of questions get lost because open science itself gets divorced from its political economic context.

KS: Yes. But interesting that you say that within open science certain questions are maybe not that welcome, so actually, it seems to be at certain places still pretty closed, right?

DB: Well, that is right, open itself is a way of closing down other kinds of debates. So, for example, in the programming world open source was promoted in order not to have a discussion about free software, because free software was just too politicised for many people. So using the term open, it was a nice woolly term that meant everything to a lot of different people, did not feel political and therefore could be promoted to certain actors, many governments, but also corporations. And people sign up to open source because it just sounds – “open source, yes, who is not for open source?” I think if you were to ask anyone here you would struggle to find anybody against open source. But if you ask them if they are for free software a lot of people would not know what it is. That concept has been pushed away. I think the same thing happens in science by these kinds of legitimating discourses. Certain kinds of critical approaches get closed down. I think you would not be welcomed if at the CERN press conference for the Higgs boson you would put up your hand and ask: “well actually, would it not have been better spending this money on solving poverty?” That would immediately not be welcomed as a legitimate line of questioning.  

KS: Yes, right. Okay, so do you think science is already open, or do we need more openness? And if so, where?

DB: Well, again, that is a strange question that assumes that I know what “open” is. I mean openness is a concept that changes over time. I think that the project of science clearly benefits from its ability to be critiqued and checked, and I do not necessarily just want to have a Popperian notion of science here – it is not just about falsification – but I think verification and the ability to check numbers is hugely important to the progress of science. So that dimension is a traditional value of science, and very important that it does not get lost. Whether or not rebranding it as open science helps us is not so straightforward. I am not sure that this concept does much for us, really. Surely it is just science? And approaches that are defined as “closed” are perhaps being defined as non-science.

KS: What has the internet changed about science and working in research?

DB: Well, I am not a scientist, so –   

KS: – as in science, as in academia. Or, what has the internet changed in research?

DB: Well, this is an interesting question. Without being too philosophical about it I hope, Heidegger was talking about the fact that science was not science anymore, and actually technology had massively altered what science was. Because science now is about using mechanisms, tools, digital devices, and computers, in order to undertake the kinds of science that are possible. So it becomes this entirely technologically driven activity. Also, today science has become much more firmly located within economic discourse, so science needs to be justified in terms of economic output, for example. It is not just the internet and the digital that have introduced this, there are larger structural conditions that I think are part of this. So, what has the Internet or the web changed about science? One thing is allowing certain kinds of scientism to be performed in public. And so you see this playing out in particular ways, certain movements – really strange movements – have emerged that are pro-science and they just seek to attack people they see as anti-science. So, for example, the polemical atheist movement led by Richard Dawkins argues that it is pro-science and anyone who is against it is literally against science – they are anti-science. This is a very strange way of conceptualising science. And some scientists I think are very uncomfortable with the way Dawkins is using rhetoric, not science, to actually enforce and justify his arguments. And another example is the "skeptics" movement, another very "pro-science" movement that has very fixed ideas about what science is. So science becomes a very strong, almost political philosophy, a scientism. I am interested in exploring how digital technologies facilitate a technocratic way of thinking: a certain kind of instrumental rationality, as it were.

KS: How open is your research, how open is your work? Do you share your work in progress with your colleagues?

DB: Well, as an academic, sharing knowledge is a natural way of working – we are very collaborative, go to conferences, present new work all the time, and publish in a variety of different venues. In any case, your ability to be promoted as an academic, to become a professor, is based on publishing, which means putting work out there in the public sphere which is then assessed by your colleagues. So the very principles of academia are about publishing, peer review, and so on and so forth. So, we just have to be a bit careful about the framing of the question in terms of: “how ‘open’ is your work?”, because I am not sure how useful that question is inasmuch as it is too embedded within certain kinds of rhetorics that I am a little bit uncomfortable with. So the academic pursuit is very much about sharing knowledge – but also knowledge being shared.

KS: Okay. I was referring to, of course, when you do work and when you have completed your research you want to share it with others because that is the point of doing the research in the first place, to find something out and then to tell the world look this is what I found out, right?

DB: Possibly. No.

KS: No?

DB: This is what I am saying. I mean –

KS: I mean, of course in a simplified way.

DB: Well, disciplines are not there to “tell the world”. Disciplines are there to do research and to create research cultures. What is the point of telling the world? The world is not necessarily very interested. And so you have multiple publics – which is one way of thinking about it. So one of my publics, if you like, is my discipline, and cognate disciplines, and then broader publics like re:publica and then maybe the general public. And there are different ways of engaging with those different audiences. If I was a theoretical physicist for example, and I publish in complex mathematical formulae,  I can put that on the web but you are not really going to get an engagement from a public as such. That will need to be translated. And therefore maybe you might write a newspaper article which translates that research for a different public. So, I think it is not about just throwing stuff on the web or what have you. I think that would be overly simplistic. It is also about translation. So do I translate my research? Well I am doing it now. I do it all the time. So, I talk to Ph.D. students and graduates, that is part of the dissemination of information, which is, I think really what you are getting at. How do you disseminate knowledge?

KS: Exactly. And knowledge referring not only to knowledge that is kind of settled and finished, you know, I have come to this conclusion, this is what I am sharing, but also knowledge that is in the making, in the process, that was what I was referring to.

DB: Sure, yes. I mean, good academics do this all the time. And I am talking particularly about academia here. I think good academics do research and then they are teaching and of course these two things overlap in very interesting ways. So if you are very lucky to have a good scholar as a professor you are going to benefit from seeing knowledge in the making. So that is a more general question about academic knowledge and education. But the question of knowledges for publics, I think that is a different question and it is very, very complex and you need to pin down what it is you want to happen there. In Britain we have this notion of the public engagement of science and that is about translation. Let’s say you do a big research project that is very esoteric or difficult to understand, and then you write a popular version of it – Stephen Hawking is a good example of this – he writes books that people can read and this has major effects beyond science and academia itself. I think this is hugely important, both in terms of understanding how science is translated, but also how popular versions of science may not themselves be science per se.

KS: So, what online tools do you use for your research?

DB: What online tools? I do not use many online tools as such. I mean I am in many ways quite a traditional scholar, I rely on books – I will just show you my notes. I take notes in a paper journal and I write with a fountain pen, which I think is a very traditional way of working. The point is that my "tools" are non-digital, I hardly ever digitise my notes and I think it is interesting to go through the medium of paper to think about the digital, because digital tools seem to offer us solutions and we are very caught up in the idea that the digital provides answers. I think we have to pause a little bit, and paper forces you to slow down – that is why I like it. It is this slowing down that I think is really important when undertaking research, giving time to think by virtue of making knowledge embodied. Obviously, when it comes to collecting data and following debates I will use digital tools. Google of course is one of the most important, Google Scholar and social media are really interesting tools, Gephi is a very interesting social network analysis tool. I use Word and Excel as does pretty much everybody else. So the important issue is choosing which digital tools to use in which contexts. One thing I do much less of is, for example, the kind of programming where people write APIs and scrapers and all these kinds of approaches. I have been involved in some projects doing that but I just do not have time to construct those tools, so I sometimes use other people's software (such as digital methods tools).

Notes, reproduced in Lewandowska and Ptak (2013)


KS: Okay, and how about organising ideas, do you do that on paper? Or for example do you use a tool for task managing?

DB: Always paper. If you have a look in my journal you can see that I can choose any page and there is an organisation of ideas going on here. For me it is a richer way to work through ideas and concepts. Eventually, you do have to move to another medium – you know I do not type my books on typewriters! – I use a word processor, for example. So eventually I do work on a computer, but by that point I think the structure is pretty much in my head but mediated through paper and ink – the computer is therefore an inscription device at the end of thinking. I dwell on paper, as it were, and then move over into a digital medium. You know, I do not use any concept mapping software, I just find it too clumsy and too annoying actually.

KS: Okay, so what puts you off not using / not being tempted by using all those tools that offer you help and offer to make you more productive?

DB: Well, because firstly, I do not want to be more productive, and secondly I do not think they help. So the first thing I tell my new students, including new Ph.D. students, is: buy a notebook and a pen and start taking notes. Do not think that the computer is your tool, or your servant. The computer will be your hindrance, particularly in the early stages of a Ph.D. It is much more important to carefully review and think through things. And that is actually the hardest thing to do, especially in this world of tweets and messages and emails – distractions are everywhere. There are no tweets in my book, thankfully, and it is the slowness and leisureliness that enables me to create a space for thinking. It is a good way of training your mind to pause and think before responding.

KS: So, you are saying that online tools kind of distract us from thinking and actually we think that we are doing a lot of stuff but actually we are not doing that much, right?

DB: Well, the classic problem is students that, for example, think they are doing an entirely new research project and map it all out in a digital tool that allows you to do fancy graphs, etc. – but they are not asking any kind of interesting research questions because they have not actually looked at the literature and they do not know the history of their subject. So it is very important that we do this, indeed some theorists have made the argument that we are forgetting our histories. And I think this is very true. The temptation to be in the future, to catch the latest wave or the latest trend affects Ph.D. students and academics as much as everybody else. And there are great dangers from chasing those kinds of solutions. Academia used to be about taking your time and being slow and considering things. And I think in the digital age academia’s value is that it can continue to do that, at least I hope so.

KS: Okay, but is there not a danger that if you say: okay, I am taking my time, I am taking my paper and my pen while others are hacking away, being busy using all those online tools, and in a way you could say okay that speeds up some part of research, at least when you draw out the cumulative essence of it, can you afford to invest the time?

DB: Well, it is not either or. It is both. The trouble is, I find anyway, with Ph.D. students, their rush to use the digital tools is to prevent them from having to use the paper. And, a classic example of this is Endnote. Everybody rushes to use Endnote because they do not like doing bibliographies. But actually, doing the bibliography by hand is one of the best things you can do because you learn your field’s knowledge, and you immediately recognise names because you are the one typing them in. Again this is a question of embodiment. When you leave that to a computer program to do it for you, laziness emerges – and you just pick and choose names to scatter over your paper. So, I am not saying you should not use such tools, I am saying that you should maybe do both. I mean, I never use these tools to construct bibliographies, I do them by hand because it encourages me to think through, what about this person are they really contributing, what do they add? And I think that is really important.

KS: Although, it probably should be more about, okay, what do I remember of this person's writing and what have they contributed, and not so much about whose name sounds fancy and which names do I need to drop here.

DB: Totally. Well, there has been some interesting work on this. Researchers have undertaken bibliometric analysis to show how references are used in certain disciplines and how common citations crop up again and again because they were used in previous papers and researchers feel the need to mention them again – so it becomes a name-checking exercise. Interestingly, few people go back and read these original canonical papers. So it is really important to read early work in a field, and place it within an historical context and trajectory, if one is to make sense of the present.

KS: A last question, I want to ask you about collaborative writing, do you write with other people and if so, how does that work? Where do you see advantages and where do you see possible trouble?

DB: Yes, I do. I have been through the whole gamut of collaborative writing, so I have seen both the failures and the successes. Collaborative writing is never easy, first and foremost. Particularly, I think, for humanities academics, because we are taught and we are promoted on the basis of our name being on the front of a paper or on the cover of a book. This obviously adds its own complications, plus you know academics tend to be very individualistic, and there are always questions about –

KS: …in spite of all the collaboration, right?


DB: Indeed, yes of course, I mean that is just the academic way, but I think you need that, because writing a book requires you to sit in a room for months and months and months and the sun is shining, everyone else is having fun and you are sitting there in a gloomy room typing away, so you need that kind of self-drive and belief, and that, of course, causes frictions between people. So I have tried various different methods of working with people, but one method I found particularly interesting is a method called booksprinting. It is essentially a time-boxed process where you come together with, let us say, four or five other scholars and you are locked in a room for the week (figuratively speaking!), except to sleep; you eat together, write together, concept map and develop a book collaboratively. And then the book that is produced is jointly authored, there are no arguments over that, if you do not agree you can leave, but the point is that the collaborative output is understood and bought into by all the participants. Now, to many academics this sounds like absolute horror, and indeed when I was first asked if I would like to be involved I was sceptical – I went along but I was sure this was going to be a complete failure. However, it was one of the most interesting collaborative writing processes I have been involved in. I have taken part in two book sprints to date (three including 2014). You are welcome to have a look at the first book, it is called New Aesthetic New Anxieties. It is amazing how productive those kinds of collaborative writing processes can be. But it has to be a managed process. So, do check out booksprinting, it is very interesting – see also Imaginary museums, Computationality & the New Aesthetic and On Book Sprints.

KS: Okay, but then for that to work what do you actually / from your experience, can you draw out factors that make it work?

DB: Sure. The most important factor is having a facilitator, so someone who does not write. And the facilitator's role is to make sure that everybody else does write. And that is an amazing ability, a key person, because they have to manage difficult people and situations – it is like herding cats. Academics do not like to be pushed, for example. And the facilitator I have worked with is very skilled at this kind of facilitation. The second thing is the kinds of writing that you do and how you do it. The booksprinting process I have been involved in has been very paper-based, so again there is a lot of paper everywhere, there are post-it notes, there is a lot of sharing of knowledge, and this is probably the bit you are going to find interesting: there is, nonetheless, a digital tool which enables you to write collaboratively. It is a cleverly written tool, it has none of the bells and whistles, it is very utilitarian and really focuses the writing process and working together. And, having seen this used on two different booksprints, I can affirm that it does indeed help the writing process. I recommend you have a look.

KS: So, what is the tool?

DB: It is called Booktype. And Adam Hyde is the facilitator who developed the process of Book Sprints, and is also one of the developers of the software.

KS: Okay, interesting. Any questions? Or any question I did not ask you, anything you want to add that we have missed out, any final thoughts? Any questions for me?

DB: Yes, I do think that a genealogy of "open science" is important and your questions are really interesting because they are informed by certain assumptions about what open science is. In other words, there is a certain position you are taking which you do not make explicit, and which I find interesting. So it might be useful to reflect on how "open science" needs to be critically unpacked further.

KS: Okay, great, thank you very much.

DB: My pleasure.

KS: Thanks.

DB: Thank you.






Interview archived at Zenodo. Transcript corrected from the original to remove errors and clarify terms and sentences. 

On Capture

In thinking about the conditions of possibility of the mediated landscape of the post-digital (Berry 2014), it is useful to explore concepts around capture and captivation, particularly as articulated by Rey Chow (2012). Chow argues that being "captivated" is

the sense of being lured and held by an unusual person, event, or spectacle. To be captivated is to be captured by means other than the purely physical, with an effect that is, nonetheless, lived and felt as embodied captivity. The French word captation, referring to a process of deception and inveiglement [or persuade (someone) to do something by means of deception or flattery] by artful means, is suggestive insofar as it pinpoints the elusive yet vital connection between art and the state of being captivated. But the English word “captivation” seems more felicitous, not least because it is semantically suspended between an aggressive move and an affective state, and carries within it the force of the trap in both active and reactive senses, without their being organised necessarily in a hierarchical fashion and collapsed into a single discursive plane (Chow 2012: 48). 

To think about capture, then, is to think about the mediatized image in relation to reflexivity. For Chow, Walter Benjamin inaugurated a major change in the conventional logic of capture: from a notion of reality being caught or contained in the copy-image, as if in a repository, the copy-image becomes mobile, and this mobility adds to its versatility. The copy-image then supersedes or replaces the original as the main focus; as such, this logic of the mechanical reproduction of images undermines hierarchy and introduces a notion of the image as infinitely replicable and extendable. Thus the "machinic act or event of capture" creates the possibility for further dividing and partitioning, that is, for the generation of copies and images, and sets in motion the conditions of possibility of a reality that is structured around the copy.

Chow contrasts capture with the modern notion of "visibility", such that, as Foucault argues, "full lighting and the eyes of a supervisor capture better than darkness, which ultimately protected. Visibility is a trap" (Foucault 1991: 200). Thus in what might be thought of as the post-digital – a term that Chow doesn't use but which I think is helpful in thinking about this contrast – what is at stake is no longer the link between visibility and surveillance, nor indeed the link between becoming-mobile and the technology of images, but rather the collapse of the "time lag" between the world and its capture.

This is when time loses its potential to "become fugitive" or "fossilised" and hence to be anachronistic. The key point is that the very possibility of memory is disrupted when images become instantaneous and therefore synonymous with an actual happening. This is the condition of the post-digital, whereby digital technologies make possible not only the instant capture and replication of an event, but also the very definition of the experience through its mediation, both at the moment of capture – such as with the waving smartphones at a music concert or event – and in the subsequent recollection and reflection on that experience.

Thus the moment of capture or "arrest" is an event of enclosure, locating and making possible the sharing and distribution of a moment through infinite reproduction and dissemination. Capture therefore represents a techno-social moment, but it is also discursive, in that it is a type of discourse derived from the imposition of power on bodies and the attachment of bodies to power. This Chow calls a heteronomy or heteropoiesis: a system or artefact designed by humans with some purpose, not able to reproduce itself, yet able to exert agency, in the form of prescription, often back onto its designers, essentially producing an externality in relation to the application of certain "laws" or regulations.

Nonetheless, capture and captivation also constitute a critical response through the possibility of a disconnecting logic and the dynamics of mimesis. This possibility, reflected through the notion of entanglements, refers to the "derangements in the organisation of knowledge caused by unprecedented adjacency and comparability or parity". This is, of course, definitional in relation to the notion of computation, which itself works through a logic of formatting, configuration, structuring and the application of computational ontologies (Berry 2011, 2014).

Here capture offers the possibility of a form of practice in relation to alienation by making the inquirer adopt a position of criticism, the art of making strange. Chow here is making links to Brecht and Shklovsky, and in particular their respective predilections for estrangement in artistic practice, such as in Brecht's notion of Verfremdung, and thus for showing how things work whilst they are being shown (Chow 2012: 26-28). In this moment of alienation the possibility is thus raised of things being otherwise. This is the art of making strange as a means to disrupt everyday conventionalism and refresh the perception of the world – art as device. The connections between techniques of capture and critical practice as advocated by Chow, and reading or writing the digital, are suggestive in relation to computation more generally, not only in artistic practice but also in terms of critical theory. Indeed, capture could be a useful hinge around which to subject the softwarization practices, infrastructures and experiences of computation to critical thought, both in terms of their technical and social operations and in terms of the extent to which they generate a coercive imperative for humans to live and stay alive under the conditions of a biocomputational regime.

Bibliography

Berry, D. M. (2011) The Philosophy of Software, London: Palgrave.

Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury.

Chow, R. (2012) Entanglements, or Transmedial Thinking about Capture, London: Duke University Press.

Foucault, M. (1991) Discipline and Punish, London: Penguin Social Sciences.

The Post-Digital

Gustave Courbet, The Painter's Studio: A Real Allegory (1855)

As we increasingly find that the world of computational abundance is normalised, the application of cheap digital technologies to manage or partially augment traditionally analogue experiences, technologies and practices will doubtless grow.[1] That is, the power of "compute" is growing both in breadth and depth as it permeates society and culture (see Davies 2013; Berry 2014a). All around us we are increasingly surrounded by new fields and flows of computation that co-construct and stabilise a new artifice for the human sensorium – streams, clouds, sensors and infrastructures. Not unlike previous moments in which mediums became part of everyday life, this new field is noticeable for its ability to modulate and transform itself through the use of algorithms and code: not just as a general plasticity but as a flexible structure that adapts to context and environment, tailored to the individual, or perhaps better, the dividual, of the computational age. This new field of computation is not necessarily top-down and corporate controlled either. Thus we see, at a bottom-up level, the emergence of a market in cheap digital processors that enable the implementation of innovative new forms of culture and cultural experimentation. We might think of these moments as part of the constellation I am calling the "post-digital" (see also Berry 2013a; Cramer 2013; Cox 2013; Philipsen 2013; Sable 2012).

Museu de Arte de São Paulo (MASP), 1968.
Designed by Lina Bo Bardi

Thus, the historical distinction between the digital and the non-digital becomes increasingly blurred, to the extent that to talk about the digital presupposes a disjuncture in our experience that makes less and less sense. Thus computation becomes spatial in its implementation, embedded within the environment and part of the texture of life itself which can be walked around, touched, manipulated and interacted with in a number of ways and means – life becomes mediated in and through the computal (Berry 2014b). Indeed, in a similar way in which the distinction between “being online” or “being offline” has become anachronistic, with our always-on smart phones and tablets and widespread wireless networking technologies, so too, perhaps, the term “digital” describes a world of the past.

Which is not to say that time is not an important aspect of computation in this post-digital world. The compressive effects of computation, and its flattening metaphors and visual language, tend, maximised perhaps by computation's tendency toward spatiality, to transform time from a diachronic to a synchronic experience. Indeed, history itself may be re-presented through the screen through a number of computational functions and methods that make it seem geometric, flat and simultaneous. A sense of history then becomes a sense of real-time flows, not so much distant and elusive, whether as cultural or individual memory, but here and now, spectacular and vividly represented and re-presented. Time in this sense is technical time, and the history attendant to it is technical history, presented through databases, code and algorithms.

Thus, within a time of computational abundance, we might think in relation to the question of the "post-digital", inasmuch as we are rapidly entering a moment when the difficulty will be found in encountering culture outside of digital media. Or perhaps the non-digital will largely be the preserve of the elite (by choice, education and wealth) or the very poor (by necessity). The detritus of society will be cast into the non-digital, and the fading and ephemeral will be preserved within computational databanks only, if it is preserved at all. Indeed, even the non-digital becomes bound up in the preservation possibilities offered by the digital:

Non-digital media technologies… become post-digital when they are not simply nostalgically revived, but functionally repurposed in (often critical) relation to digital media technologies: zines that become anti- or non-blogs, vinyl as anti-CD, cassette tapes as anti-mp3, analog film as anti-video (Cramer 2013).

Computal Surfaces: main stage for the
Republican convention in Tampa, Fla (2012)

In a post-digital age, whether something is digital or not will no longer be seen as the essential question. Or rather, the question as to whether something is or is not "digital" will be increasingly meaningless as all forms of media become themselves mediated, produced, accessed, distributed or consumed through digital devices and technologies. This is to move away from a comparative notion of the digital, contrasted with other material forms such as paper, celluloid or photographic paper, and instead to begin to think about how the digital is modulated within various materialities. It is also the point at which the contrast between "digital" and "analogue" no longer makes sense. This spectrum of the digital, a distribution across an axis of more or less computal, gives rise to the expectation that everyday life is always already computational.

Muffwiggler, Modular Synth Meetup,
University of Sussex (2013).

Thus, the post-digital is represented by and indicative of a moment when the computational has become both hegemonic and post-screenic (see Bosma 2013; Ludovico 2013). As Cramer argues, "the distinction between 'old' and 'new' media collapses in theory as well as in practice. As Kenneth Goldsmith observes, his students 'mix oil paint while Photoshopping and scour flea markets'" (Cramer 2013). The "digital" is then understood as a previous historical moment, when computation as digitality was understood in opposition to the analogue, although that is not to say that it will not remain as a marginal notion, with related practices, within post-digitality. Thus, under our contemporary conditions it might be better to think about modulations of the digital, or different intensities of the computational, as a post-digital moment, rather than digital versus analogue as such. We should therefore critically think about the way in which cadences of the computational are made and materialised. In other words, notions of quantitative and qualitative dimensions of "compute" will be increasingly important for thinking about culture, economics, society, politics and everyday life. Tracing power will in many cases be tracing compute, both in terms of the reservoirs of compute managed by gigantic computational Stacks and in the places where compute is thin and poorly served. By Stacks, I am referring to the corporations that increasingly rely on computational "technology stacks" for profit and power, such as Google, Apple, Facebook, Twitter and Amazon, but also to the technical imaginary formed through the notion of these stacks as a diagram (Berry 2013b).

“Cuddlebot”: low-tech touch/haptic sensing hardware (2013)

Compute as already always part of life might also herald that the moment of the digital as digitalisation is already the past, and that new challenges lie ahead for thinking about the way in which the computal saturates our culture, institutions and everyday life in varying degrees of modularity and intensity. This growth in computation has put citizens at an obvious disadvantage in a society that not only has historically tended to disavow the digital as a form of knowledge or practice, but also has not seen computational thinking or skills as part of the educational requirements of a well-informed citizen. For example, the lack of understanding of the importance of encryption and cryptography in digital society was humbly described recently by Glenn Greenwald, who one might have thought would have been better schooled in these technologies (Greenwald 2013). Indeed, as computer power has increased, so has the tendency to emulate older media forms to provide content within simulations of traditional containers, such as “e”-books, through techniques of skeuomorphism and glossy algorithmic interface design – rather than learning and teaching computational practices as such. This, perhaps, has the advantage that new computational forms can be used and accessed without the requisite computational skills to negotiate the new literary machines of computation, such as the underlying logics, structures, processes and code. However, it also means that in many cases today, we are unable to read what we write, and are not always the writers of the systems that are built around us (Berry 2011; Oliver, Savičić and Vasiliev 2011; Allen 2013). This illiteracy hardly provides the ideal conditions for the emergence of an informed and educated citizenry able to engage with the challenges and dangers of a fully softwarized post-digital society. It also points to the urgent need for a critical and engaged Bildung for the post-digital world, if it is not to become precariously post-democratic.


Notes

[1] This post was inspired by attending “Muffwiggler” at the University of Sussex, Saturday 16 November 2013, organised by Andrew Duff, and funded by the Centre for Digital Material Culture. The event was notionally a homage to analogue synths, but in reality was colonised by digital/analogue hybrid synthesisers and controllers which were properly post-digital in both form and function. More information http://www.muffwiggler.com and http://www.flickr.com/photos/du_ff/sets/72157632801557258/

Bibliography

Allen, J. (2013) Critical Infrastructure, accessed 31/12/2013, http://post-digital.projects.cavi.dk/?p=356

Berry, D. M. (2011) The Philosophy of Software, London: Palgrave Macmillan.

Berry, D. M. (2013a) Post-Digital Humanities, Stunlaw, accessed 30/12/2013,  http://stunlaw.blogspot.co.uk/2013/10/post-digital-humanities.html

Berry, D. M. (2013b) Digital Breadcrumbs, Stunlaw, accessed 30/12/2013, http://stunlaw.blogspot.co.uk/2013/10/digital-breadcrumbs.html

Berry, D. M. (2014a) On Compute, Stunlaw, accessed 05/01/2014, http://stunlaw.blogspot.co.uk/2014/01/on-compute.html

Berry, D. M. (2014b) Critical Theory and the Digital, New York, Continuum/Bloomsbury Academic.

Bosma, J. (2013) Post-Digital is Post-Screen – Shaping a New Visuality, accessed 30/12/2013, http://post-digital.projects.cavi.dk/?p=580

Cox, G. (2013) some old problems with post–anything (draft version), accessed 30/12/2013, http://post-digital.projects.cavi.dk/?p=230

Cramer, F. (2013) Post-digital: a term that sucks but is useful (draft 2), accessed 30/12/2013, http://post-digital.projects.cavi.dk/?p=295

Davies, J. (2013) Compute Power with Energy-Efficiency, accessed 30/12/2013, http://developer.amd.com/wordpress/media/2013/06/Compute_Power_with_Energy-Efficiency_Jem_AMD_v1.1.pdf

Greenwald, G. (2013) 30c3 Keynote, Chaos Computer Club, accessed 30/12/2013,  http://media.ccc.de/browse/congress/2013/30C3_-_5622_-_en_-_saal_1_-_201312271930_-_30c3_keynote_-_glenn_greenwald_-_frank.html

Ludovico, A. (2013) Post Digital Publishing, Hybrid and Processual Objects in Print, accessed 30/12/2013, http://post-digital.projects.cavi.dk/?p=323

Oliver, J. Savičić, G. and Vasiliev, D. (2011) Critical Engineering Manifesto, accessed 31/12/2013, http://criticalengineering.org

Philipsen, L. (2013) Do not Return to Sender – Why post-digital aesthetic research should actually distinguish between artist, critics, and audience, accessed 30/12/2013, http://post-digital.projects.cavi.dk/?p=350

Sable, D. (2012) A “Post Digital” World, Really?, Google Think Insights, accessed 30/12/2013, http://www.google.com/think/articles/a-post-digital-world-really.html

Signposts for the Future of Computal Media

I would like to begin to outline what I think are some of the important trajectories to keep an eye on in regard to what I increasingly think of as computal media. That is, the broad area dependent on computational processing technologies, or areas soon to be colonised by such technologies.

In order to do this I want to examine a number of key moments that can be used to structure thinking about the softwarization of media. By “softwarization”, I mean broadly the notion of Andreessen (2011) that “software is eating the world” (see also Berry 2011; Manovich 2013). Softwarization is then a process of the application of computation (see Schlueter Langdon 2003), in this case to all forms of historical media, but also to the generation of born-digital media.
However, this process of softwarization is tentative, multi-directional, contested, and moving on multiple strata at different modularities and speeds. We therefore need to develop critiques of the concepts that drive these processes of softwarization, but also to think about what kinds of experiences make the epistemological categories of the computal possible. For example, one feature that distinguishes the computal is its division into surfaces, rough or pleasant, and concealed inaccessible structures.
It seems to me that this task is rightly a critical undertaking. That is, an historical materialism that understands that the key organising principles of our experience are produced by ideas developed within the array of social forces that human beings have themselves created. This includes understanding the computal subject as an agent dynamically contributing and responding to the world.
So I want to now look at a number of moments to draw out some of what I think are the key developments to be attentive to in computal media. That is, not the future of new media as such, but rather “possibilities” within computal media, sometimes latent but also apparent. 
The Industrial Internet
A new paradigm called the “industrial internet” is emerging: a computational, real-time streaming ecology reconfigured in terms of digital flows, fluidities and movement. In this new industrial internet, the paradigmatic metaphor I want to use is that of real-time streaming technologies and the data flows, processual stream-based engines and the computal interfaces and computal “glue” holding them together. This is the internet of things and the softwarization of everyday life, and it represents the beginning of a post-digital experience of computation as such.
This calls for us to stop thinking about the digital as something static, discrete and object-like and instead consider ‘trajectories’ and computational logistics. In hindsight, for example, it is possible to see that new media such as CDs and DVDs were only ever the first step on the road to a truly computational media world. Capturing bits and disconnecting them from wider networks, placing them on plastic discs and stacking them in shops for us to go visit and buy seems bizarrely pedestrian today. 
Taking account of such media and related cultural practices becomes increasingly algorithmic, and as such media becomes itself mediated via software. At the same time previous media forms are increasingly digitalised and placed in databases, viewed not on original equipment but accessed through software devices, browsers and apps. As all media becomes algorithmic, it is subject to monitoring and control at a level to which we are not accustomed – e.g. Amazon’s mass deletion of Orwell’s 1984 from personal Kindles in 2009 (Stone 2009).

The rolling out of the sensor-based world of the internet of things is already underway, with companies such as Broadcom developing Wireless Internet Connectivity for Embedded Devices (WICED): “WICED Direct will allow OEMs to develop wearable sensors — pedometers, heart-rate monitors, keycards — and clothing that transmit everyday data to the cloud via a connected smartphone or tablet” (Seppala 2013). Additionally, Apple is developing new technology in this area with its iBeacon software layer, which uses Bluetooth Low Energy (BLE) to create location-aware micro-devices and “can enable a mobile user to navigate and interact with specific regions geofenced by low cost signal emitters that can be placed anywhere, including indoors, and even on moving targets” (Dilger 2013). In fact, the “dual nature of the iBeacons is really interesting as well. We can receive content from the beacons, but we can be them as well” (Kosner 2013). This relies on Bluetooth version 4.0, also called “Bluetooth Smart”, which supports devices that can be powered for many months by a small button battery, and in some cases for years. Indeed,

BLE is especially useful in places (like inside a shopping mall) where GPS location data may not be reliably available. The sensitivity is also greater than either GPS or WiFi triangulation. BLE allows for interactions as far away as 160 feet, but doesn’t require surface contact (Kosner 2013).

These new computational sensors enable Local Positioning Systems (LPS) or micro-location, in contrast to the less precise technology of Global Positioning Systems (GPS). These “location based applications can enable personal navigation and the tracking or positioning of assets” to the centimetre, rather than the metre, and hence have great potential as tracking systems inside buildings and facilities (Feldman 2009).
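To make the micro-location idea concrete: a receiving device typically estimates its distance from a beacon by comparing the received signal strength (RSSI) against the beacon’s advertised transmit power calibrated at one metre, using a log-distance path-loss model. The following is a minimal sketch of that calculation in Python; the calibration value and attenuation factor are illustrative assumptions, not any vendor’s actual implementation.

    # Rough distance estimate from a BLE beacon using the log-distance path-loss model.
    # measured_power_dbm: RSSI calibrated at 1 metre (advertised by the beacon);
    # n: environmental attenuation factor (~2.0 in free space, higher indoors).
    def estimate_distance(rssi_dbm, measured_power_dbm=-59, n=2.0):
        return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * n))

    # Example: a reading of -75 dBm from a beacon calibrated at -59 dBm at 1 m
    print(round(estimate_distance(-75), 1))  # roughly 6.3 metres with n = 2.0

In practice such estimates are smoothed over many readings and are far coarser than the centimetre precision claimed for dedicated micro-location systems, but the sketch shows why beacon-based positioning works indoors where GPS degrades.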

Bring Your Own Device (BYOD)
This shift also includes the move from relatively static desktop computers to mobile computers and tablet-based devices – the consumerisation of technology. Indeed, according to the International Telecommunication Union (ITU 2012: 1), in 2012 there were 6 billion mobile devices (up from 2.7 billion in 2006), with YouTube alone streaming 200 terabytes of video media per day. By the end of 2011, 2.3 billion people (i.e. one in three) were using the Internet (ITU 2012: 3).
Users were already creating 1.8 zettabytes of data annually by 2011, and this is expected to grow to 7.9 zettabytes by 2015 (Kalakota 2011). To put this in perspective, a zettabyte is equal to 1 billion terabytes – clearly at these scales the storage sizes become increasingly difficult for humans to comprehend. A zettabyte is roughly equal in size to twenty-five billion Blu-ray discs or 250 billion DVDs.
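These conversions are easy to sanity-check; the following lines of Python reproduce the orders of magnitude, with the per-disc capacities being nominal, illustrative assumptions (the figures quoted above shift depending on which disc format is assumed).

    # Order-of-magnitude check for the storage figures above.
    ZETTABYTE = 10**21            # bytes
    TERABYTE  = 10**12            # bytes
    BLU_RAY   = 25 * 10**9        # bytes per single-layer Blu-ray disc (nominal)
    DVD       = 4_700_000_000     # bytes per single-layer DVD (nominal)

    print(ZETTABYTE // TERABYTE)        # 1_000_000_000 -> one billion terabytes
    print(ZETTABYTE // BLU_RAY)         # 40_000_000_000 -> tens of billions of Blu-ray discs
    print(ZETTABYTE // DVD)             # ~212_765_957_446 -> roughly 200-250 billion DVDs
    print(7.9 * ZETTABYTE / TERABYTE)   # the projected 2015 figure, in terabytes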

The acceptance by users and providers of the consumerisation of technology has also opened up the space for the development of “wearables”, and these highly intimate devices are currently under development, with the most prominent example being Google Glass. Often low-power devices making use of BLE and iBeacon-type technologies, they augment our existing devices, such as the mobile phone, rather than outright replacing them, and offer new functionalities such as fitness monitoring, notification interfaces, contextual systems and so forth.

The Personal Cloud (PC)
These pressures are creating an explosion in data and a corresponding expansion in various forms of digital media (currently uploaded to corporate clouds). As a counter-move to the existence of massive centralised corporate systems there are calls for Personal Clouds (PCs), a decentralisation of data from the big cloud providers (Facebook, Google, etc.) into smaller personal spaces (see Personal Cloud 2013). Conceptually this is interesting in relation to BYOD.
This of course changes our relationship to knowledge, and the forms of knowledge which we keep and are able to use. Archives are increasingly viewed through the lens of computation, both in terms of cataloging and storage but also in terms of remediation and configuration. Practices around these knowledges are also shifting, and as social media demonstrates, new forms of sharing and interaction are made possible. Personal Cloud also has links to decentralised authentication technologies (e.g. DAuth vs OAuth).
Digital Media, Social Reading, Sprints
It has taken digital a lot longer than many had thought to provide a serious challenge to print, but it seems to me that we are now in a new moment in which digital texts enable screen-reading, if it is not an anachronism to still call it that, as a sustained reading practice. There are lots of experiments in this space, e.g. my notion of the “minigraph” (Berry 2013) or the mini-monograph, technical reports, the “multigraph” (McCormick 2013), pamphlets, and so forth. There are also new means for writing (e.g. Quip) and for social reading and collaborative writing (e.g. Book Sprints).
DIY Encryption and Cypherpunks
Together, these technologies create contours of a new communicational landscape appearing before us, and into which computational media mediates use and interaction. Phones become smart phones and media devices that can identify, monitor and control our actions and behaviour  through anticipatory computing. Whilst seemingly freeing us, we are also increasingly enclosed within an algorithmic cage that attempts to surround us with contextual advertising and behavioural nudges.
One response could be “Critical Encryption Practices”, the dual moment of a form of computal literacy and understanding of encryption technologies and cryptography combined with critical reflexive approaches. Cypherpunk approaches tend towards an individualistic libertarianism, but there remains a critical reflexive space opened up by their practices. Commentators are often dismissive of encryption as a “mere” technical solution to what is also a political problem of widespread surveillance. 
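As an indication of how low the purely technical threshold is, encrypting and decrypting a message symmetrically takes only a few lines with a standard library; the sketch below uses Python’s cryptography package purely as an illustration of the kind of basic literacy at stake, not as an endorsement of any particular tool.

    # Minimal symmetric encryption example using the Python "cryptography" package
    # (pip install cryptography). Key management, not the mathematics, is the hard part.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # secret key; whoever holds it can decrypt
    f = Fernet(key)
    token = f.encrypt(b"a private message")  # ciphertext, safe to transmit
    print(f.decrypt(token))                  # b'a private message'

The point of critical encryption practices is precisely that such mechanics are trivial compared with the political and infrastructural questions of who holds the keys, which defaults are set for us, and what metadata remains exposed.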
CV Dazzle Make-up, Adam Harvey
However, critical encryption practices could provide the political, technical and educative moments required for the kinds of media literacies important today – e.g. in civil society.
This includes critical treatment of and reflection on crypto-systems such as cryptocurrencies like Bitcoin, and the kinds of cybernetic imaginaries that often accompany them. Critical encryption practices could also develop signalling systems – e.g. the new aesthetic and Adam Harvey’s CV Dazzle work.
Augmediated Reality
The idea of supplementing or augmenting reality is being transformed with the notion of “augmediated” technologies (Mann 2001). These are technologies that offer a radical mediation of everyday life via screenic forms (such as “Glass”) to co-construct a computally generated synoptic meta-reality formed of video feeds, augmented technology and real-time streams and notifications. Intel’s work on Perceptual Computing is a useful example of this kind of media form.
The New Aesthetic
These factors raise issues of new aesthetic forms related to the computal. For example, augmediated aesthetics suggests new forms of experience in relation to its aesthetic mediation (Berry et al 2012). The continuing “glitch” digital aesthetic remains interesting in relation to the new aesthetic and aesthetic practice more generally (see Briz 2013). Indeed, the aesthetics of encryption, e.g. “complex monochromatic encryption patterns”, the mediation of encryption and so on, offers new ways of thinking about the aesthetic in relation to digital media more generally and the post-digital (see Berry et al 2013).
Bumblehive and Veillance
Within a security setting one of the key aspects is data collection and it comes as no surprise that the US has been at the forefront of rolling out gigantic data archive systems, with the NSA (National Security Agency) building the country’s biggest spy centre at its Utah Data Center (Bamford 2012) – codenamed Bumblehive. This centre has a “capacity that will soon have to be measured in yottabytes, which is 1 trillion terabytes or a quadrillion gigabytes” (Poitras et al 2013). 
This is connected to the notion of the comprehensive collection of data because, “if you’re looking for a needle in the haystack, you need a haystack,” according to Jeremy Bash, the former CIA chief of staff. The scale of the data collection is staggering: according to Davies (2013), the UK’s GCHQ has placed “more than 200 probes on transatlantic cables and is processing 600m ‘telephone events’ a day as well as up to 39m gigabytes of internet traffic”. Veillance – both surveillance and sousveillance – is made easier with mobile devices and cloud computing, and we face growing challenges in responding to these issues.
The Internet vs The Stacks
The internet as we tend to think of it has become increasingly colonised by massive corporate technology stacks. These companies, Google, Apple, Facebook, Amazon, Microsoft, are called collectively “The Stacks” (Sterling, quoted in Emami 2012) – vertically integrated giant social media corporations. As Sterling observes,

[There’s] a new phenomena that I like to call the Stacks [vertically integrated social media]. And we’ve got five of them — Google, Facebook, Amazon, Apple and Microsoft. The future of the stacks is basically to take over the internet and render it irrelevant. They’re not hostile to the internet — they’re just [looking after] their own situation. And they all think they’ll be the one Stack… and render the others irrelevant… They’re annihilating other media… The Lords of the Stacks (Sterling, quoted in Emami 2012).

The Stacks also raise the issue of resistance and what we might call counter-stacks, or hacking the stacks; movements like Indieweb and Personal Cloud computing are interesting responses to them, and Sterling optimistically thinks “they’ll all be rendered irrelevant. That’s the future of the Stacks” (Sterling, quoted in Emami 2012).
The Indieweb
The Indieweb is a kind of DIY response to the Stacks and an attempt to wrestle some control back from these corporate giants (Finley 2013). These Indieweb developers offer an interesting perspective on what is at stake in the current digital landscape; somewhat idealistic and technically oriented, they nonetheless offer a site of critique. They are also notable for “building things” – often small-scale, micro-format type things, decentralised and open source/free software in orientation. The indieweb is, then, “an effort to create a web that’s not so dependent on tech giants like Facebook, Twitter, and, yes, Google — a web that belongs not to one individual or one company, but to everyone” (Finley 2013).
Push Notification
This surface, or interactional layer, of the digital is hugely important for providing the foundations through which we interact with digital media (Berry 2011). Under development are new high-speed adaptive algorithmic interfaces (algorithmic GUIs) that can offer contextual information, and even reshape the entire interface itself, through the monitoring of our reactions to computational interfaces and feedback and sensor information from the computational device itself – e.g. Google Now. 
The Notification Layer
One of the key sites for the reconciliation of the complexity of real-time streaming computing is the notification layer, which will increasingly be an application programming interface (API) and function much like a platform. This is very much the battle taking place between the “Stacks”, e.g. Google Now, Siri, Facebook Home, Microsoft “tiles”, etc. With the political economy of advertising being transformed by the move from web to mobile, notification layers threaten revenue streams.
It is also a battle over subjectivity and the kind of subject constructed in these notification systems.
Real-time Data vs Big Data
We have been hearing a lot about “big data” and related data visualisation, methods, and so forth. Big data (exemplified by the NSA Prism programme) is largely a historical, batch computing system. A much more difficult challenge is real-time stream processing, e.g. future NSA programmes called SHELLTRUMPET, MOONLIGHTPATH and SPINNERET, and the GCHQ Tempora programme.
That is, monitoring in real-time, and being able to computationally spot patterns, undertake stream processing, etc.
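The computational difference is worth making concrete: batch processing assumes a complete dataset at rest that can be scanned repeatedly, whereas stream processing must maintain incremental state over events that arrive once and may never be stored in full. A minimal sketch of the contrast follows; the event source, keyword and threshold are invented for illustration only.

    # Batch: scan a complete, stored dataset after the fact.
    def batch_count(events, keyword):
        return sum(1 for e in events if keyword in e)

    # Stream: keep incremental state as events arrive and flag patterns in real time.
    def stream_monitor(event_stream, keyword, threshold=100):
        count = 0
        for e in event_stream:            # each event is seen once and not retained
            if keyword in e:
                count += 1
                if count == threshold:    # pattern spotted without needing a full archive
                    yield "threshold of " + str(threshold) + " matches reached"

    # Hypothetical usage over an incoming feed of 'telephone events':
    # for alert in stream_monitor(read_events(), "keyword"):
    #     print(alert)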
Contextual Computing
With multiple sensors built into new mobile devices (e.g. cameras, microphones, GPS, compass, gyroscopes, radios, etc.), new forms of real-time processing and aggregation become possible. In some senses, then, this algorithmic process is the real-time construction of a person’s possible “futures” or their “futurity” – the idea, even, that eventually the curation systems will know “you” better than you know yourself – which is interesting for notions of ethics/ethos. This is the computational real-time imaginary envisaged by corporations, like Google, that want to tell you what you should be doing next…
Anticipatory Computing
Our phones are now smart phones, and as such become media devices that can also be used to identify, monitor and control our actions and behaviour through anticipatory computing. Elements of subjectivity, judgment and cognitive capacities are increasingly delegated to algorithms and prescribed to us through our devices, and there is clearly the danger of a lack of critical reflexivity or even critical thought in this new subject. This new paradigm of anticipatory computing stresses the importance of connecting up multiple technologies to enable a new kind of intelligence within these technical devices.
Towards a Critical Response to the Post-Digital
Computation in a post-digital age is fundamentally changing the way in which knowledge is created, used, shared and understood, and in doing so changing the relationship between knowledge and freedom. Indeed, following Foucault (1982) the “task of philosophy as a critical analysis of our world is something which is more and more important. Maybe the most certain of all philosophical problems is the problem of the present time, and of what we are, in this very moment… maybe to refuse what we are” (Dreyfus and Rabinow 1982: 216). 
One way of doing this is to think about Critical Encryption Practices, for example, and the way in which technical decisions (e.g. plaintext defaults on email) are made for us. The critique of knowledge also calls for us to question the coding of instrumentalised reason into the computal. This calls for a critique of computational knowledge and as such a critique of the society producing that knowledge. 
Bibliography
Andreessen, M. (2011) Why Software Is Eating The World, Wall Street Journal, August 20 2011, http://online.wsj.com/article/SB10001424053111903480904576512250915629460.html#articleTabs%3Darticle
Bamford, J. (2012) The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say), Wired, accessed 19/03/2012, http://www.wired.com/threatlevel/2012/03/ff_nsadatacenter/all/1
Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave Macmillan.
Berry, D. M. (2013) The Minigraph: The Future of the Monograph?, Stunlaw, accessed 29/08/2013, http://stunlaw.blogspot.nl/2013/08/the-minigraph-future-of-monograph.html
Berry, D. M., Dartel, M. v., Dieter, M., Kasprzak, M. Muller, N., O’Reilly, R., and Vicente, J. L (2012) New Aesthetic, New Anxieties, Amsterdam: V2 Press.
Berry, D. M., Dieter, M., Gottlieb, B., and Voropai, L. (2013) Imaginary Museums, Computationality & the New Aesthetic, BWPWAP, Berlin: Transmediale.
Briz, N. (2013) Apple Computers, accessed 29/08/2013, http://nickbriz.com/applecomputers/
Davies, N. (2013) MI5 feared GCHQ went ‘too far’ over phone and internet monitoring, The Guardian, accessed 22/06/2013, http://www.guardian.co.uk/uk/2013/jun/23/mi5-feared-gchq-went-too-far
Dilger, D.E. (2013) Inside iOS 7: iBeacons enhance apps’ location awareness via Bluetooth LE, AppleInsider, accessed 02/09/2013, http://appleinsider.com/articles/13/06/19/inside-ios-7-ibeacons-enhance-apps-location-awareness-via-bluetooth-le

Emami, G (2012) Bruce Sterling At SXSW 2012: The Best Quotes, The Huffington Post, accessed 29/08/2013, http://www.huffingtonpost.com/2012/03/13/bruce-sterling-sxsw-2012_n_1343353.html
Feldman, S. (2009) Micro-Location Overview: Beyond the Metre…to the Centimetre, Sensors and Systems, accessed 02/09/2013, http://sensorsandsystems.com/article/columns/6526-micro-location-overview-beyond-the-metreto-the-centimetre.html

Finley, K. (2013) Meet the Hackers Who Want to Jailbreak the Internet, Wired, http://www.wired.com/wiredenterprise/2013/08/indie-web/
ITU (2012) Measuring the Information Society, accessed 01/01/2013, http://www.itu.int/ITU-D/ict/publications/idi/material/2012/MIS2012-ExecSum-E.pdf
Kalakota, R. (2011) Big Data Infographic and Gartner 2012 Top 10 Strategic Tech Trends, accessed 05/05/2012, http://practicalanalytics.wordpress.com/2011/11/11/big-data-infographic-and-gartner-2012-top-10-strategic-tech-trends

Kosner, A. W. (2013) Why Micro-Location iBeacons May Be Apple’s Biggest New Feature For iOS 7, Forbes, accessed 02/09/2013, http://www.forbes.com/sites/anthonykosner/2013/08/29/why-micro-location-ibeacons-may-be-apples-biggest-new-feature-for-ios-7/

Mann, S. (2001) Digital Destiny and Human Possibility in the Age of the Wearable Computer, London: Random House.


Manovich, L. (2013) Software Takes Command, MIT Press.
McCormick, T. (2013) From Monograph to Multigraph: the Distributed Book, LSE Blog: Impact of Social Sciences, accessed 02/09/2013, http://blogs.lse.ac.uk/impactofsocialsciences/2013/01/17/from-monograph-to-multigraph-the-distributed-book/

Personal Cloud (2013) Personal Clouds, accessed 29/08/2013, http://personal-clouds.org/wiki/Main_Page
Poitras, L., Rosenbach, M., Schmid, F., Stark, H. and Stock, J. (2013) How the NSA Targets Germany and Europe, Spiegel, accessed 02/07/2013, http://www.spiegel.de/international/world/secret-documents-nsa-targeted-germany-and-eu-buildings-a-908609.html
Schlueter Langdon, C. (2003) Does IT Matter? An HBR Debate – Letter from Chris Schlueter Langdon, Harvard Business Review (June): 16, accessed 26/08/2013, http://www.ebizstrategy.org/research/HBRLetter/HBRletter.htm and http://www.simoes.com.br/mba/material/ebusiness/ITDOESNTMATTER.pdf
Seppala, T. J. (2013) Broadcom adds WiFi Direct to its embedded device platform, furthers our internet-of-things future, Engadget, accessed 02/09/2013, http://www.engadget.com/2013/08/27/broadcom-wiced-direct/

Stone, B. (2009) Amazon Erases Orwell Books From Kindle, The New York Times, accessed 29/08/2013, http://www.nytimes.com/2009/07/18/technology/companies/18amazon.html?_r=0

Undoing Property? by Lewandowska and Ptak

Undoing Property? is a wonderful project, the final piece of which is a book edited by Marysia Lewandowska and Laurel Ptak (2013). Anyone familiar with my note-taking style will see that Lewandowska and Ptak have used my notes to “set the scene for the rest of the book’s contributions. [As] a great connective space reflecting our first discussion” (Lewandowska 2013). It is more than a little surreal to see my notes remediated in this way, especially considering the pathways that mediation took: from initial discussion in The Showroom, through pen, paper and hand to a large-scale digital scanner, through email to Sweden where Konst & Teknik digitally edited the file and placed it within the digital book, and then on to Sternberg Press, who printed the book onto paper ready for physical distribution. Below, the mediated circuit is re-presented using photographs taken by Lewandowska which were digitally distributed through WeTransfer and email. The following set of images also, incidentally, reminds me of the Google Books scans with the inclusion of fingers 🙂

Undoing Property? 

Undoing Property? examines complex relationships of ownership that exist inside art, culture, political economy, immaterial production, and the public realm today. In its pages artists and writers address aspects of computing, curating, economy, ecology, gentrification, music, publishing, piracy, and much more.  Property shapes all social relations. Its invisible lines force separations and create power relations felt through the unequal distribution of what otherwise is collectively produced value. Over the last few years the precise question of what should be privately owned and publicly shared in society has animated intense political struggles and social movements around the world. In this shadow the publication’s critical texts, interviews and artistic interventions offer models of practice and interrogate diverse sites, from the body, to the courtroom, to the server, to the museum. The book asks why propertisation itself has changed so fundamentally over the last few decades and what might be done to challenge this. The book is a result of a four-year collaboration between London-based artist Marysia Lewandowska and New York-based curator Laurel Ptak. It is produced by Tensta Konsthall, Stockholm, Casco – Office for Art, Design and Theory, Utrecht, and The Showroom and published by Sternberg Press.

Undoing Property?
Edited by Marysia Lewandowska, Laurel Ptak
Contributions by: Agency, David Berry, Nils Bohlin, Sean Dockray, Rasmus Fleischer, Antonia Hirsch, David Horvitz, Mattin, Open Music Archive, Matteo Pasquinelli, Claire Pentecost, Florian Schneider, Matthew Stadler, Marilyn Strathern, Kuba Szreder, Marina Vishmidt.
Design by Konst & Teknik
Published by Sternberg Press
169 x 239 mm, 256 pages, 30 b/w illustrations, library-bound hardcover
ISBN 978-3-943365-68-9

Undoing Property? is produced in the context of the programme COHAB, a two-year collaboration between Casco – Office for Art, Design and Theory, Utrecht, Tensta Konsthall, Stockholm, and The Showroom. COHAB is supported by a Cooperation Measures grant from the European Commission Culture Programme (2007-2013).

Bibliography
Lewandowska, M. (2013) @berrydm it’s setting the scene for the rest of the book’s contributions. A great connective space reflecting our first discussion. Thanks., Twitter, accessed 30/6/2013, https://twitter.com/screened_out/status/351438371240951810

Lewandowska, M. and Ptak, L. (2013) Undoing Property?, Berlin: Sternberg Press.

BWPWAP – Some Thoughts On Transmediale 2013

Guest post by Katja Kwastek

Transmediale 2013, even more than in previous years, operated on several intersecting layers and with an excess of different events and formats – including the intersections with Club Transmediale, the partner festival dedicated to music. This year’s title was BWPWAP, or Back When Pluto Was A Planet. I thought this was a great notion to emphasize how our worldview is made up not only of things but also of how we name and explain them – while Pluto did not at all change materially when demoted, our worldview/world model did. In the festival, however, the internet meme was mainly used to refer to the outdated or somewhat displaced. So there were lots of ‘retro’ references, like fax performances or a great letter shoot installation, the OCTO (http://telekommunisten.net/octo/).

At the beginning of the festival, at the opening ceremony, there was a reenactment of the demotion of Pluto from planet status to “dwarf planet”, given by Kristoffer Gansing. It is possible that the curators had hoped the public would vote for Pluto to be upgraded back to planet status – but after a great American-style pro-demotion presentation by Mike Brown, followed by a rather weak anti-demotion presentation (which inspired an amusing tweet: “BWPWAP = Back When Powerpoint Was All Paragraphs”), the audience surprisingly reaffirmed the demotion of Pluto to non-planet status.
In addition to the title, BWPWAP, there were four ‘threads’, entitled PAPER, NETWORKS, USER and DESIRE, and a parallel thread on the ‘Imaginary Museum’ – apparently a notion that Transmediale utilised for its self-reflection as a festival. Given this excess of (often parallel) events and topics, my impression is of course very partial. Unfortunately, the exhibition was quite weak (though with a nice title: “the miseducation of Anya Major”). Actually, it was a combination of three exhibitions: one showcasing early works by Sonia Sheridan, historically interesting work, though not really suited to the solo-exhibition format; one showing ‘tools of distorted creativity’ (interface hacks etc.); and the best being the ‘Evil Media Distribution Center’ by YoHa, a response to Matthew Fuller’s and Andrew Goffey’s book titled Evil Media (2013, MIT Press).
The better exhibitions were the one organized by Club Transmediale, ‘In That Weird Part’ dedicated to the relations of music and internet culture, with projects like ‘Curating Youtube’, appropriations of the ‘Techno Viking’, misheard lyrics etc., and the one organized by LEAP, dedicated to abstract/scientific world models, with nice work by Sascha Pohflepp entitled ‘Yesterday’s today’.

Concerning the conference threads, I can’t say much about DESIRE, as I was only able to attend a sappy performance lecture by Sandy Stone. The USER thread had some interesting moments, though it did occasionally fall back on outdated notions of the user. It included presentations by Olia Lialina about the ‘general purpose user’ as a media-competent user of software, and by Olga Goriunova about her idea of aesthetics as transindividuation applied to creative activities on online platforms, especially in relation to meme culture (a pertinent topic throughout the festival).

The NETWORKS thread was very broad, with an interesting panel on ‘depletion design’, with David Berry, Jennifer Gabrys, Marie-Luise Angerer, presenting a recent book with the same title. There were also critiques of commercial social media platforms, for example in the keynote by Geert Lovink, who presented various projects by or related to the Institute of Network Cultures, or in Florian Alexander Schmidt’s analyses of crowdsourced design. 
The PAPER thread was interesting because it brought the ‘digital humanities’ to Transmediale, with continuous workshops on Post Digital Publishing, which were located in a vacant spot under the staircase and allowed for only 10 participants, so that people were continually crowding around the area. A highlight of this was the keynote by Kenneth Goldsmith on conceptual writing, with the provocative thesis that “with the rise of the web, writing has met its photography” – I don’t think I really agree, but it is worth thinking about – which speaks to the whole discourse of computationality, also represented by David Berry.
The ‘side threads’ on the imaginary museum and classification, represented by the Pluto metaphor, were also interesting conceptually, although my impression was that all the back-references to Malraux etc. have many shortcomings. There was a nice paper by Ian Hacking, given as the Marshall McLuhan lecture in the Canadian Embassy, about classification issues, labeling theory etc.
As a further general observation, I noticed that there was a very young audience in attendance, and although there were innumerable events, all were well attended. It struck me that Transmediale 2013 was more a festival of digital culture than a festival of digital art.
Dr. Katja Kwastek is an art historian at the school of arts at Ludwig Maximilian University in Munich. She served as vice-director of the Ludwig Boltzmann Institute Media.Art.Research. in Linz (Austria), where she directed the research projects on interactive art until 2009. Prior to this, she worked as assistant professor at the art history department of the Ludwig Maximilian University in Munich and was a Visiting Scholar at the Rhode Island School of Design (Providence, RI). Her research focuses on contemporary and new media art, media theory and aesthetics. She has curated exhibition projects, lectured widely and published many books and essays, including Ohne Schnur. Art and Wireless Communication, Frankfurt (2004). She recently finished a book manuscript on the aesthetics of interaction in digital art (forthcoming MIT Press, 2013).



Setup Seminar: Understanding The New Aesthetic

A very enjoyable evening was spent at Setup, Utrecht, discussing the New Aesthetic, with presentations by myself, Darko Fritz and Frank Kloos, organised by Daniëlle de Jonge. The discussion was opened by Tijmen Schep, who gave an interesting introduction to the main contours of the new aesthetic and explained why Setup had organised the evening lectures.

Darko Fritz tried to unpick the claims of the new aesthetic to being either “new” or an “aesthetic”, placing computer art and new media art within an art historical context. Frank Kloos gave a wonderful presentation with examples of the new aesthetic from a variety of different contexts, including datamoshing and the recent use of the new aesthetic in music videos.

Overall the event was a great success, with a really excellent audience composed of interesting people, experts and artists, and the discussion around computation, and the extent to which it has become part of everyday life, was surprisingly vibrant and full of great contributions.

My earlier post on the New Aesthetic here.

Some pictures below.

Darko Fritz
Frank Kloos

Compos 68 in the audience.

Daniëlle de Jonge
Tijmen Schep
