Category Archives: critical code

Interview with David M. Berry at re:publica 2013

Open science interview at re:publica conference in Berlin, 2013, by Kaja Scheliga.

Kaja Scheliga: So to start off… what is your field, what do you do?


David M. Berry: My field is broadly conceived as digital humanities or software studies. I focus in particular on critical approaches to understanding technology, through theoretical and philosophical work, so, for example, I have written a book called Philosophy of Software and I have a new book called Critical Theory and The Digital but I am also interested in the multiplicity of practices within computational culture as well, and the way the digital plays out in a political economic context.

KS: Today, here at the re:publica you talked about digital humanities. What do you associate with the term open science?

DB: Well, open science has very large resonances with Karl Popper’s notion of the open society, and I think the notion of open itself is interesting in that kind of construction, because it implies a “good”. To talk about open science implies firstly that closed science is “bad”, that science should be somehow widely available, that everything is published and there is essentially a public involvement in science. It has a lot of resonances, not necessarily clear ones. It is a cloudy concept.

KS: So where do you see the boundary between open science and digital humanities? Do they overlap or are they two separate fields? Is one part of the other?


DB: Yes, I think, as I was talking in the previous talk about how digital humanities should be understood within a constellation, I think open science should also be understood in that way. There is no single concept as such, and we can bring up a lot of different definitions, and practitioners would use it in multiple ways depending on their fields. But I think, there is a kind of commitment towards open access, the notion of some kind of responsibility to a public, the idea that you can have access to data and to methodologies, and that it is published in a format that other people have access to, and also there is a certain democratic value that is implicit in all of these constructions of the open: open society, open access, open science, etc. And that is really linked to a notion of a kind of liberalism that the public has a right, and indeed has a need to understand.  And to understand in order to be the kind of citizen that can make decisions themselves about science. So in many ways it is a legitimate discourse, it is a linked and legitimating discourse about science itself, and it is a way of presenting science as having a value to society.

KS:  But is that justified, do you agree with this concept? Or do you rather look at it critically?

DB: Well, I am a critical theorist. So, for me these kinds of concepts are never finished. They always have within them embedded certain kinds of values and certain kinds of positions. And so for me it is an interesting concept and I think “open science” is interesting in that it emerges at a certain historical juncture, and of course with the notion of a “digital age” and all the things that have been talked about here at the re:publica, everyone is so happy and so progressive and the future looks so bright – apparently…

KS: Does it?

DB: Yes, well, from the conference perspective, because re:publica is a technology conference, there is this whole discourse of progress – which is kind of an American techno-utopian vision that is really odd in a European context – for me anyway. So, being a critical theorist, it does not necessarily mean that I want to dismiss the concept, but I think it is interesting to unpick the concept and see how it plays out in various ways. In some ways it can be very good, it can be very productive, it can be very democratic, in other ways it can be used for example as a certain legitimating tool to get funding for certain kinds of projects, which means other projects, which are labelled “closed”, are no longer able to get funded. So, it is a complex concept, it is not necessarily “good” or “bad”.

KS: So, not saying ‘good’ or ‘bad’, but looking at the dark side of say openness, where do you see the limits? Or where do you see problem zones?

DB: Well, again, to talk about the “dark side,” it is kind of like Star Wars or something. We have to be very careful with that framework, because the moment you start talking about the dark side of the digital, which is a current, big discussion going on, for example, in the dark side of the digital humanities, I think it is a bit problematic. That is why thinking in terms of critique is a much better way to move forward. So for me, what would be more interesting would be to look at the actual practices of how open science is used and deployed. Which practitioners are using it? Which groups align themselves with it? Which policy documents? And which government policies are justified by rolling back to open science itself? And then, it is important to perform a kind of genealogy of the concept of “open science” itself. Where does it come from? What is it borrowing from? Where is the discussion over that term? Why did we come to this term being utilised in this way? And I think that then shows us the force of a particular term, and places it within an historical context. Because open science ten years ago may have meant one thing, but open science today might mean something different. So, it is very important we ask these questions.

KS: All right. And are there any open science projects that come to mind, spontaneously, right now?


DB: I’m not sure they would brand themselves as “open science” but I think CERN would be for me a massive open science project, and which likes to promote itself in these kinds of ways. So, the idea of a public good, publishing their data, having a lot of cool things on their website the public can look at, but ultimately, that justification for open science is disconnected because, well, what is the point of finding the Higgs Boson, what is the actual point, where will it go, what will it do? And that question never gets asked because it is open science, so the good of open science makes it hard for us to ask these other kinds of questions. So, those are the kinds of issues that I think are really important. And it is also interesting in terms of, for example, there was an American version of CERN which was cancelled. So why was CERN built, how did open science enable that? I mean, we are talking huge amounts of money, large amounts of effort, would this money have been better transferred to solving the problem of unemployment, you know, we are in a fiscal crisis at the moment, a financial catastrophe and these kinds of questions get lost because open science itself gets divorced from its political economic context.

KS: Yes. But interesting that you say that within open science certain questions are maybe not that welcome, so actually, it seems to be at certain places still pretty closed, right?

DB: Well, that is right, open itself is a way of closing down other kinds of debates. So, for example, in the programming world open source was promoted in order not to have a discussion about free software, because free software was just too politicised for many people. So using the term open, it was a nice woolly term that meant everything to a lot of different people, did not feel political and therefore could be promoted to certain actors, many governments, but also corporations. And people sign up to open source because it just sounds – “open source, yes, who is not for open source?” I think if you were to ask anyone here you would struggle to find anybody against open source. But if you ask them if they are for free software a lot of people would not know what it is. That concept has been pushed away. I think the same thing happens in science by these kinds of legitimating discourses. Certain kinds of critical approaches get closed down. I think you would not be welcomed if at the CERN press conference for the Higgs boson you would put up your hand and ask: “well actually, would it not have been better spending this money on solving poverty?” That would immediately not be welcomed as a legitimate line of questioning.  

KS: Yes, right. Okay, so do you think science is already open, or do we need more openness? And if so, where?

DB: Well, again, that is a strange question that assumes that I know what “open” is. I mean openness is a concept that changes over time. I think that the project of science clearly benefits from its ability to be critiqued and checked, and I do not necessarily just want to have a Popperian notion of science here – it is not just about falsification – but I think verification and the ability to check numbers is hugely important to the progress of science. So that dimension is a traditional value of science, and very important that it does not get lost. Whether or not rebranding it as open science helps us is not so straightforward. I am not sure that this concept does much for us, really. Surely it is just science? And approaches that are defined as “closed” are perhaps being defined as non-science.

KS: What has the internet changed about science and working in research?

DB: Well, I am not a scientist, so –   

KS: – as in science, as in academia. Or, what has the internet changed in research?

DB: Well, this is an interesting question. Without being too philosophical about it I hope, Heidegger was talking about the fact that science was not science anymore, and actually technology had massively altered what science was. Because science now is about using mechanisms, tools, digital devices, and computers, in order to undertake the kinds of science that are possible. So it becomes this entirely technologically driven activity. Also, today science has become much more firmly located within economic discourse, so science needs to be justified in terms of economic output, for example. It is not just the internet and the digital that have introduced this, there are larger structural conditions that I think are part of this. So, what has the Internet or the web changed about science? One thing is allowing certain kinds of scientism to be performed in public. And so you see this playing out in particular ways, certain movements – really strange movements – have emerged that are pro-science and they just seek to attack people they see as anti-science. So, for example, the polemical atheist movement led by Richard Dawkins argues that it is pro-science and anyone who is against it is literally against science – they are anti-science. This is a very strange way of conceptualising science. And some scientists I think are very uncomfortable with the way Dawkins is using rhetoric, not science, to actually enforce and justify his arguments. And another example is the “skeptics” movement, another very “pro-science” movement that has very fixed ideas about what science is. So science becomes a very strong, almost political philosophy, a scientism. I am interested in exploring how digital technologies facilitate a technocratic way of thinking: a certain kind of instrumental rationality, as it were.

KS: How open is your research, how open is your work? Do you share your work in progress with your colleagues?

DB: Well, as an academic, sharing knowledge is a natural way of working – we are very collaborative, go to conferences, present new work all the time, and publish in a variety of different venues. In any case, your ability to be promoted as an academic, to become a professor, is based on publishing, which means putting work out there in the public sphere which is then assessed by your colleagues. So the very principles of academia are about publishing, peer review, and so on and so forth. So, we just have to be a bit careful about the framing of the question in terms of: “how ‘open’ is your work?”, because I am not sure how useful that question is inasmuch as it is too embedded within certain kinds of rhetorics that I am a little bit uncomfortable with. So the academic pursuit is very much about sharing knowledge – but also knowledge being shared.

KS: Okay. I was referring to, of course, when you do work and when you have completed your research you want to share it with others because that is the point of doing the research in the first place, to find something out and then to tell the world look this is what I found out, right?

DB: Possibly. No.

KS: No?

DB: This is what I am saying. I mean –

KS: I mean, of course in a simplified way.

DB: Well, disciplines are not there to “tell the world”. Disciplines are there to do research and to create research cultures. What is the point of telling the world? The world is not necessarily very interested. And so you have multiple publics – which is one way of thinking about it. So one of my publics, if you like, is my discipline, and cognate disciplines, and then broader publics like re:publica and then maybe the general public. And there are different ways of engaging with those different audiences. If I was a theoretical physicist for example, and I publish in complex mathematical formulae,  I can put that on the web but you are not really going to get an engagement from a public as such. That will need to be translated. And therefore maybe you might write a newspaper article which translates that research for a different public. So, I think it is not about just throwing stuff on the web or what have you. I think that would be overly simplistic. It is also about translation. So do I translate my research? Well I am doing it now. I do it all the time. So, I talk to Ph.D. students and graduates, that is part of the dissemination of information, which is, I think really what you are getting at. How do you disseminate knowledge?

KS: Exactly. And knowledge referring not only to knowledge that is kind of settled and finished, you know, I have come to this conclusion, this is what I am sharing, but also knowledge that is in the making, in the process, that was what I was referring to.

DB: Sure, yes. I mean, good academics do this all the time. And I am talking particularly about academia here. I think good academics do research and then they are teaching and of course these two things overlap in very interesting ways. So if you are very lucky to have a good scholar as a professor you are going to benefit from seeing knowledge in the making. So that is a more general question about academic knowledge and education. But the question of knowledges for publics, I think that is a different question and it is very, very complex and you need to pin down what it is you want to happen there. In Britain we have this notion of the public engagement of science and that is about translation. Let’s say you do a big research project that is very esoteric or difficult to understand, and then you write a popular version of it – Stephen Hawking is a good example of this – he writes books that people can read and this has major effects beyond science and academia itself. I think this is hugely important, both in terms of understanding how science is translated, but also how popular versions of science may not themselves be science per se.

KS: So, what online tools do use for your research?

DB: What online tools? I do not use many online tools as such. I mean I am in many ways quite a traditional scholar, I rely on books – I will just show you my notes. I take notes in a paper journal and I write with a fountain pen which I think is a very traditional way of working. The point is that my “tools” are non-digital, I hardly ever digitise my notes and I think it is interesting to go through the medium of paper to think about the digital, because digital tools seem to offer us solutions and we are very caught up in the idea that the digital provides answers. I think we have to pause a little bit, and paper forces you to slow down – that is why I like it. It is this slowing down that I think is really important when undertaking research, giving time to think by virtue of making knowledge embodied. Obviously, when it comes to collecting data and following debates I will use digital tools. Google of course is one of the most important, Google Scholar and social media are really interesting tools, and Gephi is a very interesting social network analysis tool. I use Word and Excel as does pretty much everybody else. So the important issue is choosing which digital tools to use in which contexts. One thing I do much less of is, for example, the kind of programming where people write APIs and scrapers and these kinds of approaches. I have been involved in some projects doing that but I just do not have time to construct those tools, so I sometimes use other people’s software (such as digital methods tools).

Notes, reproduced in Lewandowska and Ptak (2013)


KS: Okay, and how about organising ideas, do you do that on paper? Or for example do you use a tool for task managing?

DB: Always paper. If you have a look in my journal you can see that I can choose any page and there is an organisation of ideas going on here. For me it is a richer way to work through ideas and concepts. Eventually, you do have to move to another medium – you know I do not type my books on typewriters! – I use a word processor, for example. So eventually I do work on a computer, but by that point I think the structure is pretty much in my head but mediated through paper and ink – the computer is therefore an inscription device at the end of thinking. I dwell on paper, as it were, and then move over into a digital medium. You know, I do not use any concept mapping software, I just find it too clumsy and too annoying actually.

KS: Okay, so what puts you off using all those tools that offer you help and offer to make you more productive? Why are you not tempted by them?

DB: Well, because firstly, I do not want to be more productive, and secondly I do not think they help. So the first thing I tell my new students, including new Ph.D. students, is: buy a notebook and a pen and start taking notes. Do not think that the computer is your tool, or your servant. The computer will be your hindrance, particularly in the early stages of a Ph.D. It is much more important to carefully review and think through things. And that is actually the hardest thing to do, especially in this world of tweets and messages and emails – distractions are everywhere. There are no tweets in my book, thankfully, and it is the slowness and leisureliness that enables me to create a space for thinking. It is a good way of training your mind to pause and think before responding.

KS: So, you are saying that online tools kind of distract us from thinking and actually we think that we are doing a lot of stuff but actually we are not doing that much, right?

DB: Well, the classic problem is students that, for example, think they are doing an entirely new research project and map it all out in a digital tool that allows you to do fancy graphs, etc. – but they are not asking any kind of interesting research questions because they have not actually looked at the literature and they do not know the history of their subject. So it is very important that we do this, indeed some theorists have made the argument that we are forgetting our histories. And I think this is very true. The temptation to be in the future, to catch the latest wave or the latest trend affects Ph.D. students and academics as much as everybody else. And there are great dangers from chasing those kinds of solutions. Academia used to be about taking your time and being slow and considering things. And I think in the digital age academia’s value is that it can continue to do that, at least I hope so.

KS: Okay, but is there not a danger that if you say: okay, I am taking my time, I am taking my paper and my pen while others are hacking away, being busy using all those online tools, and in a way you could say okay that speeds up some part of research, at least when you draw out the cumulative essence of it, can you afford to invest the time?

DB: Well, it is not either or. It is both. The trouble is, I find anyway, with Ph.D. students, their rush to use the digital tools is to prevent them from having to use the paper. And, a classic example of this is Endnote. Everybody rushes to use Endnote because they do not like doing bibliographies. But actually, doing the bibliography by hand is one of the best things you can do because you learn your field’s knowledge, and you immediately recognise names because you are the one typing them in. Again this is a question of embodiment. When you leave that to a computer program to do it for you, laziness emerges – and you just pick and choose names to scatter over your paper. So, I am not saying you should not use such tools, I am saying that you should maybe do both. I mean, I never use these tools to construct bibliographies, I do them by hand because it encourages me to think through, what about this person are they really contributing, what do they add? And I think that is really important.

KS: Although, it probably should be more about, okay, what do I remember of this person’s writing, and what have they contributed, and not so much about whose name sounds fancy and which names do I need to drop here.

DB: Totally. Well, there has been some interesting work on this. Researchers have undertaken bibliometric analysis to show how references are used in certain disciplines and how common citations crop up again and again because they were used in previous papers and researchers feel the need to mention them again – so it becomes a name-checking exercise. Interestingly, few people go back and read these original canonical papers. So it is really important to read early work in a field, and place it within an historical context and trajectory, if one is to make sense of the present.

KS: A last question, I want to ask you about collaborative writing, do you write with other people and if so, how does that work? Where do you see advantages and where do you see possible trouble?

DB: Yes, I do. I have been through the whole gamut of collaborative writing, so I have seen both the failures and the successes. Collaborative writing is never easy, first and foremost. Particularly I think for humanities’ academics, because we are taught and we are promoted on the basis of our name being on the front of a paper or on the cover of a book. This obviously adds its own complications, plus you know academics tend to be very individualistic, and there are always questions about –

KS: …in spite of all the collaboration, right?


DB: Indeed, yes of course, I mean that is just the academic way, but I think you need that, because writing a book requires you to sit in a room for months and months and months and the sun is shining, everyone else is having fun and you are sitting there in a gloomy room typing away, so you need that kind of self-drive and belief, and that, of course, causes frictions between people. So I have tried various different methods of working with people, but one method I found particularly interesting is a method called booksprinting. It is essentially a time-boxed process where you come together with, let us say, four or five other scholars, you are locked in a room for the week (figuratively speaking!), except to sleep, and you eat together, write together, concept map and develop a book, collaboratively. And then the book that is produced is jointly authored, there are no arguments over that, if you do not agree you can leave, but the point is that the collaborative output is understood and bought into by all the participants. Now, to many academics this sounds like absolute horror, and indeed when I was first asked if I would like to be involved I was sceptical – I went along but I was sure this was going to be a complete failure. However it was one of the most interesting collaborative writing processes I have been involved in. I have taken part in two book sprints to date (three including 2014). You are welcome to have a look at the first book, it is called New Aesthetic New Anxieties. It is amazing how productive those kinds of collaborative writing processes can be. But it has to be a managed process. So, do check out booksprinting, it is very interesting – see also Imaginary museums, Computationality & the New Aesthetic and On Book Sprints.

KS: Okay, but then for that to work what do you actually / from your experience, can you draw out factors that make it work?

DB: Sure. The most important factor is having a facilitator, so someone who does not write. And the facilitator’s role is to make sure that everybody else does write. And that is an amazing ability, a key person, because they have to manage difficult people and situations – it is like herding cats. Academics do not like to be pushed, for example. And the facilitator I have worked with is very skilled at this kind of facilitation. The second thing is the kinds of writing that you do and how you do it. The booksprinting process I have been involved in has been very paper-based, so again there is a lot of paper everywhere, there are post-it notes, there is a lot of sharing of knowledge, and this is probably the bit you are going to find interesting: there is, nonetheless, a digital tool which enables you to write collaboratively. It is a cleverly written tool, it has none of the bells and whistles, it is very utilitarian and really focuses the writing process and working together. And, having seen this used on two different booksprints, I can affirm that it does indeed help the writing process. I recommend you have a look.

KS: So, what is the tool?

DB: It is called Booktype. And Adam Hyde is the facilitator who developed the process of Book Sprints, and is also one of the developers of the software.

KS: Okay, interesting. Any questions? Or any question I did not ask you, anything you want to add that we have missed out, any final thoughts? Any questions for me?

DB: Yes, I do think that a genealogy of “open science” is important and your questions are really interesting because they are informed by certain assumptions about what open science is. In other words, there is a certain position you are taking which you do not make explicit, and which I find interesting. So it might be useful to reflect on how “open science” needs to be critically unpacked further.

KS: Okay, great, thank you very much.

DB: My pleasure.

KS: Thanks.

DB: Thank you.






Interview archived at Zenodo. Transcript corrected from the original to remove errors and clarify terms and sentences. 

Digital Breadcrumbs

In April 2013, the world population was 7,057,065,162 (Hunt 2013). This is a population that increasingly accesses and uses communications and digital media, and creates huge quantities of real-time and archived data, although it remains divided in its access to digital technology (Berry 2011). We often talk about the vast increase in data creation and transmission but it is sometimes difficult to find recent and useful quantitative measures of the current contours of digital media. Indeed, the internet as we tend to think of it has become increasingly colonised by massive corporate technology stacks. These companies – Google, Apple, Facebook, Amazon, Microsoft – are called collectively “the Stacks” (Berry 2013). Helpfully, the CIA’s chief technology officer, Ira Hunt (2013), has listed the general data numbers for the “stacks” and gives some useful comparative numbers in relation to telecoms and SMS messaging (see figure 1).


Data Provider: Quantitative Measures

Google (2009 stats from SEC filing)
More than 100 petabytes of data.
One trillion indexed URLs.
Three million servers.
7.2 billion page-views per day.

Facebook (August 2012)
More than one billion users.
300 petabytes of data; more than 500 terabytes per day.
Holds 35% of the world’s photographs.

YouTube (2013)
More than 1,000 petabytes of data (1 exabyte).
More than 72 hours of video uploaded per minute.
37 million hours per year.
4 billion views per day.

Twitter (2013)
More than 124 billion tweets per year.
390 million tweets per day, or ~4,500 tweets per second.

Global Text Messaging (2013)
More than 6.1 trillion text messages per year.
193,000 messages sent per second, or 876 per person per year.

US Cell Calls (2013)
More than 2.2 trillion minutes per year.
19 minutes per person per day.
A year of uncompressed telephone data is smaller in size than a year of YouTube data.

figure 1: Growth in Data Collections and Archives (adapted from Hunt 2013)
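As a rough sanity check, several of the per-day and per-second figures in figure 1 are internally consistent. A few lines of Python, using only the numbers as reported by Hunt (2013), confirm the arithmetic (the small divergence on the per-person SMS figure is presumably down to rounding in the original):

```python
SECONDS_PER_DAY = 24 * 60 * 60            # 86,400
SECONDS_PER_YEAR = 365 * SECONDS_PER_DAY  # 31,536,000

# Twitter: 390 million tweets per day is roughly 4,500 per second.
tweets_per_second = 390_000_000 / SECONDS_PER_DAY
print(round(tweets_per_second))           # 4514

# Global SMS: 193,000 messages per second is roughly 6.1 trillion per year.
sms_per_year = 193_000 * SECONDS_PER_YEAR
print(round(sms_per_year / 1e12, 2))      # 6.09 (trillion)

# Per person, at the April 2013 population of 7,057,065,162: roughly
# 860 messages per year, close to the 876 quoted, given rounding.
print(round(sms_per_year / 7_057_065_162))  # 862
```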

The CIA have a particular interest in big data and growth in the “digital breadcrumbs” left by digital devices. Indeed, they are tasked with security of the United States and have always had an interest in data collection and analysis, but it is fascinating to see how increasingly the value of data comes to shape the collection of SIGINT which is digital and subject to computational analysis. Hunt argued,

“The value of any piece of information is only known when you can connect it with something else that arrives at a future point in time… Since you can’t connect dots you don’t have, it drives us into a mode of, we fundamentally try to collect everything and hang on to it forever” (Sledge 2013)

It is also interesting to note the implicit computationality that shapes and frames the way in which intelligence is expected to develop due to the trends in data and information growth. Nevertheless, these desires shape not just the CIA or other security services, but any organisation that is interested in using archival and real-time data to undertake analysis and prediction based on data – which is increasingly all organisations in a computational age.

Information has time value, and can soon lose its potency. This drives the growth of not just big data, but real-time analysis – particularly where real-time data and archival databases can be compared and processed in real-time. Currently real-time is a huge challenge for computational systems and pushes at the limits of current computal systems and data analytic tools. Unsurprisingly, new levels of expertise are called for, usually grouped under the notion of “data science”, a thoroughly interdisciplinary approach sometimes understood as the movement from “search” to “correlation”. Indeed, as Hunt argues,

“It is really very nearly within our grasp to be able to compute on all human generated information,” Hunt said. After that mark is reached, Hunt said, the [CIA] agency would also like to be able to save and analyze all of the digital breadcrumbs people don’t even know they are creating (Sledge 2013).

In a technical sense the desire in these “really big data” applications is the move from what is called “batch map/reduce”, such as represented by Hadoop and related computational systems to “real-time map/reduce” whereby real-time analytics are made possible, represented currently by technologies like Google’s Dremel (Melnik et al 2010), Caffeine (Higgenbotham 2010), Impala (Brust 2012), Apache Drill (Vaughan-Nichols 2013), Spanner (Iqbal 2013), etc. This is the use of real-time stream processing combined with complex analytics and the ability to manage large historical data sets. The challenges for the hardware are considerable, requiring peta-scale RAM architectures so that the data can be held in memory, but also the construction of huge distributed memory systems enabling in-memory analytics (Hunt 2013).
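The distinction between batch and real-time map/reduce can be sketched in miniature. The following Python toy is not Hadoop or Dremel code, just an illustration of the two styles: counting events first as a batch job over a complete data set, then incrementally as records arrive on a stream, so that an up-to-date answer is available at any moment.

```python
from collections import Counter
from functools import reduce

events = ["login", "search", "login", "upload", "search", "login"]

# Batch style (Hadoop-like): map over the whole archived data set,
# then reduce the mapped key/value pairs into a single result.
mapped = [(event, 1) for event in events]

def reducer(acc, pair):
    key, n = pair
    acc[key] = acc.get(key, 0) + n
    return acc

batch_counts = reduce(reducer, mapped, {})

# Streaming style (real-time analytics): maintain a running result
# that is updated as each record arrives; the answer can be read
# mid-stream rather than only after the batch job completes.
stream_counts = Counter()
for event in events:            # imagine this loop never terminates
    stream_counts[event] += 1   # analytics are queryable at any point

assert batch_counts == dict(stream_counts)
print(batch_counts)
```

The engineering challenge described above is exactly this second loop at scale: holding both the running state and the historical data in memory so the incremental update stays fast.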


Traditional Computer Processing → Real-Time Analytics/Big Data

Data on storage area network (SAN) → Data at processor
Move data to question → Move question to data
Backup → Replication management
Vertical scaling → Horizontal scaling
Capacity after demand → Capacity ahead of demand
Disaster recovery → Continuity of Operations Plan (COOP)
Size to peak load → Dynamic/elastic provisioning
Tape → Storage area network (SAN)
Storage area network (SAN) → Disk
Disk → Solid-state disk
RAM limited → Peta-scale RAM

figure 2: Tectonic Technology Shifts (adapted from Hunt 2013)

These institutional demands are driving the development of new computing architectures with principles associated with them, such as data close to compute, power at the edge, optical computing/optical bus, the end of the motherboard, the use of shared pools of everything, and new softwarized hardware systems that allow compute, storage, networking, and even the entire data centre to be subject to software control and management (Hunt 2013). This is the final realisation of the importance of the network, and shows the limitations of current network technologies, such that they become one of the constraints on future softwarized system growth.

This continues the move towards context as the key technical imaginary shaping the new real-time streaming digital environment (see Berry 2012), with principles such as “Schema on Read”, which enables the data returned to be shaped in relation to the context of the question asked; “user-assembled analytics”, which requires answers to be given for a set of research questions; and the importance of elastic computing, which enables computing power to be utilised in response to a query or processing demand in real-time, similar to the way electricity is drawn in greater quantities from the mains as it is required.
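The “Schema on Read” principle can be illustrated with a minimal Python sketch (my own hypothetical example, not how any particular query engine works): raw records are stored untyped as they arrive, and a schema is imposed only at query time, in relation to the question being asked:

```python
import json

# Raw records stored exactly as they arrive; no table structure is
# imposed at ingest time (that would be "schema on write").
raw_store = [
    '{"user": "a", "lat": "52.52", "lon": "13.40"}',
    '{"user": "b", "lat": "48.85", "lon": "2.35", "device": "phone"}',
]

# Schema on read: the caller supplies the shape and types it wants, and
# the data is cast into that shape only when the question is asked.
def query(store, schema):
    results = []
    for line in store:
        record = json.loads(line)
        results.append({field: cast(record[field])
                        for field, cast in schema.items()
                        if field in record})
    return results

# A location-oriented question imposes a geographic schema on the raw
# bytes; a different question could impose an entirely different one.
rows = query(raw_store, {"user": str, "lat": float})
print(rows[0])  # {'user': 'a', 'lat': 52.52}
```

The same stored bytes can thus answer differently shaped questions, which is what allows the data returned to be formed in relation to the context of the query.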

These forces are combining in ways that are accelerating the pace of data collection, whether from the data exhausts left by users, or through open-source intelligence that literally vacuums up data from the fibre-optic cables that straddle the globe. As such, they also raise important questions about the forms of critical technical practice that are relevant to them, and about how we can ensure that citizens remain informed in relation to them. To take one small example, the mobile phone is now packed with real-time sensors that constantly monitor and process contextual information about its location, its use and the activities of its user. This data is not always under the control of the user, and in many cases is easily leaked, hacked or collected by third parties without the understanding or consent of the user (Berry 2012).

The notion that we leave behind “digital breadcrumbs”, not just on the internet but across the whole of society, the economy, culture and even everyday life, is an issue that societies are just coming to terms with. Notwithstanding the recent Snowden revelations (see Poitras et al 2013), the new computational techniques outlined in this article demonstrate the disconnect between people’s everyday understanding of technology and its penetration of life, and the reality of total surveillance. Not just the lives of others are at stake here, but the very shape of public culture and the ability of individuals to make a “public use of reason” (Kant 1784) without being subject to the chilling effects of state and corporate monitoring of our public activities. Indeed, computal technologies such as those described here have little respect for the public/private distinction that our political systems have naturalised as a condition of possibility for political life at all. This makes it ever more imperative that we provide citizens with the ability to undertake critical technical practices, both in order to choose how to manage the digital breadcrumbs they leave as trails in public spaces, and in order to pull down the blinds on the post-digital gaze of state and corporate interests through the use of cryptography and critical encryption practices.

Bibliography

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave.

Berry, D. M (2012) The social epistemologies of software, Social Epistemology, 26 (3-4), pp. 379-398. ISSN 0269-1728

Berry, D. M. (2013) Signposts for the Future of Computal Media, Stunlaw, accessed 14/10/2013, http://stunlaw.blogspot.co.uk/2013/08/signposts-for-future-of-computal-media.html

Brust, A. (2012) Cloudera’s Impala brings Hadoop to SQL and BI, accessed 14/10/2013, http://www.zdnet.com/clouderas-impala-brings-hadoop-to-sql-and-bi-7000006413/

Higgenbotham, S. (2010) How Caffeine Is Giving Google a Turbo Boost, accessed 14/10/2013, http://gigaom.com/2010/06/11/behind-caffeine-may-be-software-to-inspire-hadoop-2-0/

Hunt, I. (2013) The CIA’s “Grand Challenges” with Big Data, accessed 14/10/2013,  http://new.livestream.com/gigaom/structuredata/videos/14306067

Iqbal, M. T. (2013) Google Spanner : The Future Of NoSQL, accessed 14/10/2013,  http://www.datasciencecentral.com/profiles/blogs/google-spanner-the-future-of-nosql

Kant, I. (1784) What Is Enlightenment?, accessed 14/10/2013, http://www.columbia.edu/acis/ets/CCREAD/etscc/kant.html

Melnik, S., Gubarev, A., Long, J. J., Romer, G., Shivakumar, S., Tolton, M. and Vassilakis, T. (2010) Dremel: Interactive Analysis of Web-Scale Datasets, Proc. of the 36th Int’l Conf on Very Large Data Bases, pp. 330-339.

Poitras, L., Rosenbach, M., Schmid, F., Stark, H. and Stock, J. (2013) How the NSA Targets Germany and Europe, Spiegel, accessed 02/07/2013, http://www.spiegel.de/international/world/secret-documents-nsa-targeted-germany-and-eu-buildings-a-908609.html

Sledge, M. (2013) CIA’s Gus Hunt On Big Data: We ‘Try To Collect Everything And Hang On To It Forever’, accessed 14/10/2013, http://www.huffingtonpost.com/2013/03/20/cia-gus-hunt-big-data_n_2917842.html

Vaughan-Nichols, S. J. (2013) Drilling into Big Data with Apache Drill, accessed 14/10/2013, http://blog.smartbear.com/open-source/drilling-into-big-data-with-apache-drill/

The New Aesthetic: A Maieutic of Computationality

Screen testing at main stage for the Republican convention in Tampa, Fla (2012)

Many hasty claims are now being made that the new aesthetic is over, finished, or defunct. I think that, as with many of these things, we will have to wait and see the extent to which the new aesthetic is “new”, an “aesthetic”, used in practice, or has any trajectory associated with it. For me, the responses it generates are as interesting as the concept of the new aesthetic itself.

And regarding the “remembering” (perhaps, territorialization) of new media and previous practices, let’s not forget that forgetting things (deterritorialization) can be extremely productive, both theoretically and in everyday practice (as elpis, perhaps, if not as entelechy of new generations). Indeed, forgetting can be like forgiving,[1] and in this sense can allow the absorption or remediation of previous forms (a past bequeathed by the dead) that may have been contradictory or conflictual to be transcended at a higher level (this may also happen through a dialectical move, of course).[2] This is, then, a politics of memory as well as an aesthetic.

But the claim that the new aesthetic “seems to be all gesture and no ideology” is clearly mistaken. Yes, the NA is clearly profoundly gestural and is focused on the practice of doing, in some sense, even if the doing is merely curatorial or collecting other things (as archive/database of the present). The doing is also post-human in that algorithms and their delegated responsibility and control appear to be a returning theme (as the programming industry, as logics of military colonisation of everyday life, as technical mediation, as speed constitutive of absolute past, or as reconstitution of knowledge itself). It is also ideological to the extent that it is an attempt to further develop a post-human aesthetic (and of course, inevitably this will/may/should end in failure) but nonetheless reflects in interesting ways a process of cashing out the computational in the realm of the aesthetic – in some senses a maieutic of computational memory, seeing and doing (a “remembering” of glitch ontology or computationality).

As to the charge that historicism inevitably counters the claims of the new aesthetic, one might wish to consider the extent to which the building of the new aesthetic may share the values of computer science (highly ideological, I might add), which is itself profoundly ahistorical and which enables the delegation of the autonomy of the new aesthetic (as code/software) as a computational sphere. This is not to deny the importance of critical theory here, far from it, but rather to raise a question about computation’s immunity to the claims that critical approaches inevitably make – as Ian Bogost recently declared (about a different subject), are these not just “self-described radical leftist academics” and their “predictable critiques”? Could not the new aesthetic form an alliance here with object-oriented ontology?

Within this assemblage, the industrialisation of programming and memory becomes linked to the industrialisation of “seeing” (and here I am thinking of mediatic industries). What I am trying to gesture towards, if only tentatively, is that if the new aesthetic, as an aesthetic of the radically autonomous claims of a highly computational post-digital society, might format the world in ways which profoundly determine, if not offer concrete tendencies, towards an aesthetic which is immune to historicism – in other words the algorithms aren’t listening to the humanists – do we need to follow Stephen Ramsay’s call for Humanists to build?

Here I point both to the industrialisation of memory and to the drive towards a permanent revolution in all forms of knowledge that the computational industries ceaselessly aim towards. That is, the new aesthetic may be a reflexive sighting (the image, the imaginary, the imagined?) and acknowledgement of the mass-produced temporal objects of the programming industries, in as much as they are shared structures, forms, and means, that is, algorithms and codes, that construct new forms of reception, in terms to which consciousness and the collective unconscious will increasingly correspond.

Notes

[1] “Forgiving is the only reaction which does not merely re-act but acts anew and unexpectedly, unconditioned by the act which provoked it and therefore freeing from its consequences both the one who forgives and the one who is forgiven” (Hannah Arendt, The Human Condition, page 241); “and if he trespass against thee… and… turn again to thee, saying, I changed my mind; thou shalt release him” (Luke 17: 3-4)
[2] Here I am thinking in terms of Mannheim’s concepts of “Generation Entelechy” and “Generation Unit” to consider the ways in which the quicker the tempo of social and cultural change, here understood as represented through digital technology, the greater the chances that a particular generation location’s group will react to changed circumstances by producing their own entelechy.

Coping Tests as a Method for Software Studies

In this post I want to begin to outline a method for software reading that in some senses can form the basis of a method in software studies more generally. The idea is to use the pragmata of code, combined with its implicit temporality and goal-orientedness, to develop an idea of what I call coping tests. This notion draws on Heidegger’s idea of “coping” as a specific means of experiencing that takes account of the at-handiness (Zuhandenheit) of equipment (that is, the entities/things/objects being used in action); in other words, coping tests help us to observe the breakdowns of coded objects. This is useful because it helps us to think about the way software/code is in some senses a project that is not just static text on a screen, but a temporal structure that has a past, a processing present, and a futural orientation to the completion (or not) of a computational task. I want to develop this in contrast to attempts by others to focus on the code either through a heavily textual approach (critical code studies tends in this direction), or else through a purely functionality-driven approach (which can have idealist implications in some forms, whereby a heavily mathematised approach tends towards a platonic notion of form).

In my previous book, The Philosophy of Software (Berry 2011), I use obfuscated code as a helpful example, not as a case of unreadable reading, or even for the spectacular; rather, I use it as a stepping-off point to talk about the materiality of code through the notion of software testing. Obfuscated code is code deliberately written to be unreadable to humans but perfectly readable to machines. This can take a number of different forms, from simply mangling the text (from a human point of view), to using distraction techniques, such as confusing or deliberately mislabelling variables, functions, calls, etc. It can even take the form of aesthetic effects, like drawing obvious patterns, streams, and lines in the code, or forming images through the arrangement of the text.
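A minimal illustration in Python (my own toy example, not one drawn from the book): the first function is mangled from the human point of view, yet to the machine it is equivalent to its readable counterpart:

```python
# Deliberately obfuscated, but perfectly machine-readable: the mangled
# names and compressed layout hide that this simply sums the squares
# of its input.
def O0O(l, I=0):
    for O in l: I += O * O
    return I

# The same function written for human readers.
def sum_of_squares(values):
    return sum(v * v for v in values)

# Both pass the same test: the machine runs them identically.
print(O0O([1, 2, 3]))             # 14
print(sum_of_squares([1, 2, 3]))  # 14
```

Running both and comparing their outputs is precisely the kind of trial of strength at stake here: it is the execution, not the reading, that shows the obfuscated version to be legitimate code rather than nonsense.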

Testing is a hugely important part of the software lifecycle: it links the textual source code to the mechanic software and creates the feedback cycle between the two. This I linked to Callon and Latour’s (via Boltanski and Thévenot) use of the notion of ‘tests’ (or trials of strength), implying that it is crucially the running of these obfuscated programs that shows that they are legitimate code (they call these legitimate tests), rather than nonsense. The fact that they are unreadable by humans and yet testable is very interesting, more so as they become aesthetic objects in themselves: programmers start to create ASCII art both as a way of making the (unreadable) code readable again as an image, and as a way of adding another semiotic layer to the meaning of the code’s function.

The nature of coping that these tests imply (as trials of strength), combined with the mutability of code, is then constrained through limits placed in terms of the testing and the structure of the project-orientation. This is also how restrictions are delegated into the code; they serve as what Boltanski and Thévenot call ‘tests’, and can then be retested through ‘trials of strength’. The borders of the code are also enforced through tests of strength which define the code qua code, in other words as the required/tested coded object. It is important to note that these can also be reflexively “played with” in terms of clever programming that works at the borderline of acceptability for programming practices (hacking is an obvious example of this).

In other words, testing as coping tests can be understood in two different modes: (i) ontic coping tests, which legitimate and approve the functionality and content of the code, in other words that the code is doing what it should, instrumentally, ethically, etc. Here we need to work and think at a number of different levels, from unit testing, application testing, user interface testing, and system testing more generally, in addition to taking account of the context and materialities that serve as conditions of possibility for testing (which could take the form of a number of approaches, including ethnographies, discursive approaches, etc.); and (ii) ontological coping tests, which legitimate the code qua code, that it is code at all, for example by authenticating that the code is the code we think it is. We can think of code signing as an example of this, although it has a deeper significance as the quiddity of code. This second mode takes a more philosophical approach towards how we can understand, recognise or agree on the status of code as code and identify underlying ontological structural features, etc.
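These two modes might be sketched, very schematically, in Python (a toy illustration under my own assumptions, not a claim about how testing or code signing is implemented in practice):

```python
import hashlib

code = "def add(a, b):\n    return a + b\n"

# (i) Ontic coping test: does the code do what it should?
namespace = {}
exec(code, namespace)
assert namespace["add"](2, 3) == 5  # the functionality is legitimated

# (ii) Ontological coping test: is this code the code we think it is?
# Code signing reduces, in miniature, to checking a cryptographic
# digest against a trusted one: the code is authenticated qua code.
trusted_digest = hashlib.sha256(code.encode()).hexdigest()

def authenticate(source, digest):
    return hashlib.sha256(source.encode()).hexdigest() == digest

assert authenticate(code, trusted_digest)            # code qua code
assert not authenticate(code + "# tampered", trusted_digest)
print("both coping tests passed")
```

The two assertions fail for different reasons: the first when the code does not do what it should, the second when the artefact is not the code we took it to be, however well it might function.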

For critical theory, I think tests are a useful abstraction as an alternative (or addition) to the close reading of source code. This can be useful from a humanities perspective for teaching some notions of ‘code’ through the idea of ‘iteracy’ for reading code, and will be discussed throughout my new book, Critical Theory and the Digital, in relation to critical readings of software/code opened up through the categories given by critical theory. But this is also extremely important for contemporary critical researchers and students, who require a much firmer grasp of computational principles in order to understand an economy, culture and society that have become softwarized, and more generally for the humanities today, where some knowledge of computation is becoming required to undertake research.

One of the most interesting aspects of this approach, I think, is that it helps sidestep the problems associated with literally reading source code, and the problematic of computational thinking in situ as a programming practice. Coping tests can be developed within a framework of “depth”, in as much as different kinds of tests can be performed by different research communities; in some senses this is analogous to a test suite in programming. For example, one might have UI/UX coping tests, functionality coping tests, API tests, forensic tests (linking to Matthew Kirschenbaum’s notion of forensic media), and even archaeological coping tests (drawing from media archaeology, and particularly theorists such as Jussi Parikka). Here I am thinking both in terms of coping tests written in the present to “test” the “past”, as it were, but also of the interesting history of software testing itself, which could be reconceptualised through this notion of coping tests, both as test scripts (discursive) and in terms of software programming practice more generally, social ontologies of testing, testing machines, and so forth.[1] We might also think about the possibilities for thinking in terms of social epistemologies of software (drawing on Steve Fuller’s work, for example).

As culture and society are increasingly softwarized, it seems to me that it is very important that critical theory is able to develop concepts in relation to software and code, as the digital. In a later post I hope to lay out a framework for studying software/code through coping tests and a framework/method with case studies (which I am developing with Anders Fagerjord, from IMK, Oslo University).

Notes

[1] Perhaps this is the beginning of a method for what we might call software archaeology. 

New Book: Life in Code and Software: Mediated life in a complex computational ecology

Life in Code and Software (cover image by Michael Najjar)

New book out in 2012 with Open Humanities Press: Life in Code and Software: Mediated Life in a Complex Computational Ecology.


This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. Life in Code and Software introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology, made up of disparate software ecologies, that we inhabit. As such, we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. That is, we live in a world where computational concepts and ideas are foundational, or ontological, which I call computationality, and within which code and software become the paradigmatic forms of knowing and doing, such that other candidates for this role (air, the economy, evolution, the environment, satellites, etc.) are understood and explained through computational concepts and categories.


The New Bifurcation? Object-Oriented Ontology and Computation

Alan Turing

There are now some interesting challenges emerging to the philosophical systems described in object-oriented ontology, such as Alex Galloway’s recent piece, ‘A response to Graham Harman’s “Marginalia on Radical Thinking”’ and Christian Thorne’s, ‘To The Political Ontologists‘, as well as my own contribution, ‘The Uses of Object-Oriented Ontology‘.

Here, I want to tentatively explore the links between my own notion of computationality as ontotheology and how object-oriented ontology unconsciously reproduces some of these structural features that I think are apparent in its ontological and theological moments. In order to do this, I want to begin outlining some of the ways one might expect the ‘ontological moment’, as it were, to be dominated by computational categories and ideas which seem to hold greater explanatory power. In this regard I think this recent tweet by Robert Jackson is extremely revealing,

Robert Jackson (@Recursive_idiot)

04/06/2012 13:34

I think this Galloway / OOO issue can be resolved with computability theory. Objects / units need not be compatible with the state.

Revealing, too, are the recent discussions by members of object-oriented ontology and the importance of the computational medium for facilitating its reproduction – see Levi Bryant’s post ‘The Materiality of SR/OOO: Why Has It Proliferated?‘, and Graham Harman’s post ‘on philosophical movements that develop on the internet‘.

It is interesting to note that these philosophers do not take account of the possibility that the computational medium itself may have transformed the way in which they understand the ontological dimension of their projects. Indeed, the taken-for-granted materiality of digital media is clearly being referred to in relation to a form of communication theory – as if the internet were merely a transparent transmission channel – rather than seeing the affordances of the medium encouraging, shaping, or creating certain ways of thinking about things, as such.

Of course, they might respond, clearly the speed and publishing affordances allow them to get their messages out quicker, correct them, and create faster feedback and feedforward loops. However, I would argue that the computational layers (software, applications, blogs, tweets, etc.) also discipline the user/writer/philosopher to think within and through particular computational categories. I think it is not a coincidence that what is perhaps the first internet or born-digital philosophy has certain overdetermined characteristics that reflect the medium within which they have emerged. I am not alone in making this observation, indeed, Alexander Galloway has started to examine the same question, writing,

[T]he French philosopher Catherine Malabou asks: “What should we do so that consciousness of the brain does not purely and simply coincide with the spirit of capitalism?”….Malabou’s query resonates far and wide because it cuts to the heart of what is wrong with some philosophical thinking appearing these days. The basic grievance is this: why, within the current renaissance of research in continental philosophy, is there a coincidence between the structure of ontological systems and the structure of the most highly-evolved technologies of postfordist capitalism? I am speaking, on the one hand, of computer networks in general, and object-oriented computer languages (such as Java or C++) in particular, and on the other hand, of certain realist philosophers such as Bruno Latour, but also more pointedly Quentin Meillassoux, Graham Harman, and their associated school known as “speculative realism.” Why do these philosophers, when holding up a mirror to nature, see the mode of production reflected back at them? Why, in short, a coincidence between today’s ontologies and the software of big business? (Galloway, forthcoming, original emphasis)

He further argues:

Philosophy and computer science are not unconnected. In fact they share an intimate connection, and have for some time. For example, set theory, topology, graph theory, cybernetics and general system theory are part of the intellectual lineage of both object-oriented computer languages, which inherit the principles of these scientific fields with great fidelity, and for recent continental philosophy including figures like Deleuze, Badiou, Luhmann, or Latour. Where does Deleuze’s “control society” come from if not from Norbert Wiener’s definition of cybernetics? Where do Latour’s “actants” come from if not from systems theory? Where does Levi Bryant’s “difference that makes a difference” come from if not from Gregory Bateson’s theory of information? (Galloway, forthcoming).

Ian Bogost’s (2012a) Alien Phenomenology is perhaps the most obvious case where the links between a computational approach and a philosophical system are deeply entwined: objects, units, collections, lists, software philosophy, carpentry (as programming), etc. Indeed, Robert Jackson also discusses some of the links with computation, making connections between the notions of interfaces and encapsulation in object-oriented programming and object-oriented ontology’s notion of withdrawal, and so forth. He writes,

Encapsulation is the notion that objects have both public and private logics inherent to their components. But we should be careful not to regard the notion that private information is deliberately hidden from view, it is rather the unconditional indifference of objects qua objects. Certain aspects of the object are made public and others are occluded by blocking off layers of data. The encapsulated data can still be related to, even if the object itself fails to reveal it (Jackson 2011).

This, he argues, serves as a paradigmatic example of the object-oriented ontologists’ speculations about objects as objects. A research project around object-oriented computational systems would therefore, presumably, allow us to cast light on wider questions about other kinds of objects; after all, objects are objects in the flat ontology of object-oriented ontology. In contrast, I would argue that it is no surprise that object-oriented ontology and object-oriented programming have these deep similarities, as they are drawing from the same computational imaginary, or foundational ideas, about what things are and how they are categorised in the world, in other words a computational ontotheology – computationality.
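Jackson’s point about encapsulation can be made concrete in a short sketch (Python here, which enforces privacy only by convention, unlike the `private` keyword of Java or C++; the class is my own hypothetical example):

```python
class Unit:
    """An object with public and private logics: some aspects are made
    public through its interface, while the rest is occluded, or
    'withdrawn', behind it."""

    def __init__(self):
        # Occluded inner state: by convention, the leading underscore
        # marks this as not part of the object's public face.
        self._inner = {"spin": 0.5, "history": []}

    def observe(self):
        # Only a chosen aspect of the object is made public.
        return self._inner["spin"]

u = Unit()
print(u.observe())  # 0.5, the public face of the object
# The encapsulated data "can still be related to" (Jackson) even though
# the object's interface does not itself reveal it.
```

The design choice is the point: what the object exposes is decided at the interface, so the “withdrawn” remainder is not hidden by secrecy but by the unconditional indifference of the object’s public face.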

The next move is the step that Alex Galloway makes, to link this to the wider capitalist order, postfordist or informational capitalism (what I would call Late Capitalism). He then explores how this ideological superstructure is imposed onto a capitalist mode of production, both to legitimate and to explain its naturalness or inevitability. Galloway argues,

(1) If recent realist philosophy mimics the infrastructure of contemporary capitalism, should we not show it the door based on this fact alone, the assumption being that any mere repackaging of contemporary ideology is, by definition, anti-scientific and therefore suspect on epistemological grounds? And (2) even if one overlooks the epistemological shortcomings, should we not critique it on purely political grounds, the argument being that any philosophical project that seeks to ventriloquize the current industrial arrangement is, for this very reason, politically retrograde? (Galloway, forthcoming).

He further writes,

Granted, merely identifying a formal congruity is not damning in itself. There are any number of structures that “look like” other structures. And we must be vigilant not to fetishize form as some kind of divination–just as numerology fetishizes number. Nevertheless are we not obligated to interrogate such a congruity? Is such a mimetic relationship cause for concern? Meillassoux and others have recently mounted powerful critiques of “correlationism,” so why a blindness toward this more elemental correlation?… What should we do so that our understanding of the world does not purely and simply coincide with the spirit of capitalism? (Galloway, forthcoming, original emphasis).

Galloway concludes his article by making the important distinction between materialism and realism, pointing out that materialism must be historical and critical whereas realism tends towards an ahistoricism. By historicising object-oriented ontology, we are able to discern the links between the underlying computational capitalism and its theoretical and philosophical manifestations.

Charles Darwin

More work needs to be done here to trace the trajectories that are hinted at, particularly the computationality I see implicit in object-oriented ontology and speculative realism more generally. But I also want to tentatively gesture towards object-oriented ontology as one discourse contributing to a new bifurcation (as Whitehead referred to the nature/culture split). In this case, not between nature and culture, which today have begun to reconnect as dual hybridised sites of political contestation – for example, climate change – but rather as computation versus nature-culture.

Nature-culture becomes a site of difference, disagreement, political relativism and a kind of ‘secondary’ quality, in other words of ‘values’ and ‘felicity conditions’. Computationality, or some related ontological form, becomes the site of primary qualities or ‘facts’, the site of objectivity, and is foundational, ahistorical, unchanging and a replacement for nature in modernity as the site of agreement upon which a polity is made possible – a computational society.

Here, the abstract nature of objects within object-oriented programming, formal objects which inter-relate to each other and interact (or not), and yet remain deeply computational, mathematical and discrete is more than suggestive of the flat ontology that object-oriented ontology covets. The purification process of object-oriented design/programming is also illustrative of the gradual emptying of the universe of ‘non-objects’ by object-oriented ontology, which then serves to create ontological weight, and the possibility of shared consensus within this new bifurcated world. This creates a united foundation, understood as ontological, a site of objectivity, facts, and with a strict border control to prevent this pure realm being affected by the newly excised nature-culture. Within this new bifurcation, we see pure objects placed in the bifurcated object-space and subjects are located in the nature-culture space – this is demonstrated by the empty litanies that object-oriented ontologists share and which describe abstract objects, not concrete entities. This is clearly ironic in a philosophical movement that claims to be wholly realist and displays again the anti-correlationist paradox of object-oriented ontology.

This ontological directive also points thought towards the cartography of pure objects, propositions on the nature of ‘angels’, ‘Popeye’ and ‘unicorns’, and commentary on commentary in a scholastic vortex through textual attempts to capture and describe this abstract sphere – without ever venturing into the ‘great outdoors’ that object-oriented ontologists claim to respect. What could be closer to the experience of contemporary capitalist experience than the digital mazes that are set up by the likes of Facebook and Google, to trap the user into promises of entertainment and fulfilment by moving deeper and deeper around the social ontologies represented in capitalist social networks, and which ultimately resolve in watching advertisements to fuel computational capitalism?

Galloway rightly shows us how to break this spell, reflected also in the object-oriented ontologists’ refusal to historicise, through a concrete analysis of the historical and material conditions of production. He writes:

One might therefore label this the postfordist response to philosophical realism in general and Meillassoux in particular: after software has entered history, math cannot and should not be understood ahistorically… math itself, as algorithm, has become a historical actor. (Galloway, forthcoming, original emphasis).

Bibliography

Bogost, I. (2012a) Alien Phenomenology: or What It’s Like To Be A Thing, Minnesota University Press.

Galloway, A. R. (forthcoming) The Poverty of Philosophy: Realism and Postfordism, copy supplied by the author.

Jackson, R. (2011) Why we should be Discrete in Public – Encapsulation and the Private lives of Objects, accessed 04/06/2012, http://robertjackson.info/index/wp-content/uploads/2011/01/Aarhus-presentation.pdf


The Uses of Object-Oriented Ontology

Object-oriented ontologists argue that we must no longer make the correlationist error of privileging the being of humans within ontology, instead moving to a ‘democracy of objects’ (see Bryant 2011). In this, they follow the other speculative realists in attempting to develop a notion of ‘flat ontology’. This flat ontology is one in which hierarchy is banished, and it therefore bears a striking resemblance to the universe described by science, albeit differing in not seeking reductionist explanations in terms of causation, etc. Nonetheless, there seems to be no World, in the Heideggerian sense, for the speculative realist, who, observing the relative position of philosophy vis-à-vis science within human culture, endeavors to replicate or supplement scientific inquiry without human culture by providing a speculative and philosophical description of the universe through the notion of withdrawn or partially visible objects – Morton calls this ekphrasis or “ultra-vivid description” (Morton 2011: 170) – that is, to refute the presumed correlationism of scientific practice. In most varieties of object-oriented ontology, therefore, I think that they are actually undertaking object-oriented onticology. That is, a position more interested in beings, rather than Being, something I discuss further below. For example, Ian Bogost (2012a) outlines a system of thought in which no object has precedence or hierarchy over another, and yet all share a commonality which, following Heidegger, Bogost calls being and which we might understand as ‘objectness’ or ‘being an object’.[1] This suggests a revealing paradox raised by trying to place a general case (being) as equivalent to the particular (beings) within this flat ontology, which is justified by virtue of the singularity of what he calls a ‘tiny ontology’ (Bogost 2012a: 22).

So, what is at stake in the project of object-oriented ontology – a philosophy whose readership consists of humans who are actively solicited? Indeed, as part of this project, object-oriented ontology seeks to convince the reader of her own experiential equality in relation to the quantitative variety of experiences of different beings within the universe, human and non-human (see Charlesworth 2012). This, of course, has political implications. Here, I want to explore how and why this group of self-defined ‘anti-correlationists’ works so hard at a rhetorical attempt to convince its readers of the importance of the object-oriented ontology (OOO) project. We might also note that object-oriented philosophy has knowingly borrowed its label from object-oriented programming, a method of structured computer software design and programming. I suspect that an unconscious set of assumptions drawn from an ontotheology of computationality (or glitch ontology) underlies “object-oriented ontology”, something I intend to return to more explicitly in a later article (but see Bogost 2009b for a related discussion; also Berry 2011).

Again, I think it is useful to turn to Ian Bogost’s work, as he clearly outlines object-oriented ontology in Alien Phenomenology: or What It’s Like To Be A Thing. This book is written to be widely read, and Bogost has acknowledged as much on various fora. Moreover, its intended readership is clearly and unmistakably human.

We ought to think in public. We ought to be expanding our spheres of influence and inspiration with every page we write. We ought to be trying to influence the world, not just the blinkered group that goes to our favorite conference. And that principle ought to hold no matter your topic of interest, be it Proust or videogames or human factors engineering or the medieval chanson de geste. No matter your field, it can be done, and people do it all the time. They’re called “good books.”… And I’ve tried very hard as an author to learn how to write better and better books, books that speak to a broader audience without compromising my scholarly connections, books that really ought to exist as books (Bogost 2011; see also Bogost 2012: 88-91).

So, rather than asking what it is like to be a thing, I want to explore what is the use of knowing what it is to be a thing. In other words, we might ask what are the uses of object-oriented ontology? What are the practices of object-oriented ontologists, and how do they reflect upon their own, mostly discursive practices, and their relationships with ‘objects’?

Object-oriented ontology can be understood as a descriptive project for philosophy, which Bogost, following Harman, christens Ontography (Bogost 2012a: 36), a “name for a general inscriptive strategy, one that uncovers the repleteness of units [Bogost’s term for objects] and their interoperability” (Bogost 2012a: 38).[2] For Bogost, this project involves the creation of lists, a “group of items loosely joined not by logic or power or use but by the gentle knot of the comma”, he explains, “Ontography is an aesthetic set theory, in which a particular configuration is celebrated merely on the basis of its existence” (Bogost 2012a: 38).[3] Here we see why Bogost is keen to draw out the similarities to the creation of aesthetic collections in the New Aesthetic (see Berry 2012, Bogost 2012b). Drawing on Harman, Bogost describes why the “rhetoric of lists” is useful to a philosophical project:

Some readers may… dismiss them as an “incantation” or “poetics” of objects. But most readers will not soon grow tired, since the rhetorical power of these rosters of beings stems from their direct opposition to the flaws of current mainstream philosophy… The best stylistic antidote to this grim deadlock is a repeated sorcerer’s chant of the multitude of things that resist any unified empire (Harman quoted in Bogost 2012a: 39)

Whilst the claims of a “grim deadlock” or “current mainstream philosophy” remain undefined and unexamined, for Bogost making lists “hones a virtue: the abandonment of anthropocentric narrative coherence in favor of worldly detail” (Bogost 2012a: 42) – an attempt, we might say, to get closer to the buzzing variety of the ‘real’. Further, he explains, “Lists of objects without explication can do the philosophical work of drawing our attention towards them with greater attentiveness” (Bogost 2012a: 45). An ontograph, he claims, is a “crowd” (Bogost 2012a: 59). These are also, we might note in passing, extremely partial lists, reflecting the rhetorical intentions of the litany reciter, and only a description in the weakest sense of the term (see Appendix I below).[4]

Bogost attempts to circumvent this problem by applying a method he calls carpentry, after Harman and Lingis, who use the term to refer to the way in which “things fashion one another and the world at large” (Bogost 2012a: 93). Bogost introduces philosophical software carpentry to implement the creation of what he calls “ontographic tools to characterize the diversity of being” (Bogost 2012a: 94). Whilst I consider this a brilliant move by Bogost, I hesitate to label it philosophy. One of these tools he calls the Latour Litanizer, which generates “random” litanies based on randomized selections of Wikipedia pages (although it does not appear to have been used within Alien Phenomenology itself, which has a constant refrain in the choice of items in its litanies, see Appendix I below). Whilst an interesting example of software litany creation, it is hardly divorced from its programmer (see Berry 2011). This is further demonstrated by the example of the “image toy”, which selected random photographs from the Flickr website and therefore occasionally showed images of women, one of whom was in a playboy bunny suit. In response to criticism, Bogost was required to hand-code a specific query preventing certain outputs of philosophical software carpentry, namely no women in bunny suits, defined in the code as:

Options.Tags = “(object OR thing OR stuff) AND NOT (sexy OR woman OR girl)”
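Bogost’s query above uses Flickr’s tag-search syntax. By way of illustration only (the function names and example data below are hypothetical, not Bogost’s code), the same hand-coded exclusion can be sketched as a litany generator, which makes plain that the “random” litany is filtered by an entirely human editorial decision:

```python
import random

# Hypothetical sketch of a litany generator with a hand-coded exclusion
# filter, analogous in spirit to the Flickr tag query quoted above.
INCLUDE = {"object", "thing", "stuff"}
EXCLUDE = {"sexy", "woman", "girl"}

def passes_filter(tags):
    # An item qualifies if it carries at least one included tag
    # and none of the excluded tags.
    tags = set(tags)
    return bool(tags & INCLUDE) and not (tags & EXCLUDE)

def litanize(items, n=3, seed=None):
    # Select n random qualifying items, joined by
    # "the gentle knot of the comma".
    rng = random.Random(seed)
    eligible = [name for name, tags in items if passes_filter(tags)]
    return ", ".join(rng.sample(eligible, min(n, len(eligible))))

items = [
    ("gypsum crystal", ["object", "mineral"]),
    ("capsicum pepper", ["thing", "food"]),
    ("playboy bunny photo", ["woman", "sexy"]),  # excluded by the filter
    ("propane flame", ["stuff"]),
]
print(litanize(items, n=3, seed=1))
```

The sketch underlines the argument: the apparently random litany is shaped at every step by the programmer’s choices, from the candidate pool to the exclusion list itself.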

I am working through Ian Bogost’s (2012a) work as a representative example of object-oriented ontology, allowing it to stand in for the varieties of speculative realism. Whilst acknowledging some significant differences in the content of their philosophical systems, the general form of their argument seems to me to remain fairly consistent, claiming that philosophy made a catastrophic error in following Kant into correlationism – the mistaken belief in the importance of the human as a co-constructor of knowledge and understanding. I want to challenge this claim on two grounds: first, a performative contradiction in relation to the selection of intended readers capable of being influenced by the persuasive discourse of object-oriented ontology; and second, an unexamined formalism which is implicit in the construction of the speculative realist philosophical system. Both of these, I believe, are highly damaging to the claims of the speculative realist position, but the second criticism points towards a potential political conservatism at work within the project of speculative realism more generally. These are not the only weaknesses in the object-oriented ontology position, but I think they are significant enough to warrant discussion.

One striking aspect of the project outlined within Alien Phenomenology is its aim towards a phenomenological practice. Bogost writes, “As philosophers, our job is to amplify… the noise of objects… Our job is to write the speculative fictions of their processes, of their… operations… Our job is to get our hands dirty…” (Bogost 2012a: 34). In contrast to Marx’s dictum that philosophers have hitherto only interpreted the world, whereas the point is to change it, Bogost proposes that we should describe it, or create other actors to describe it for us, by making philosophical software (see Bogost 2012a: 110). As Bogost himself notes,

“Why do we give the Civil War soldier, the guilty Manhattan project physicist, the oval-headed alien anthropomorph, and the intelligent celestial race so much more credence than the scoria cone, the obsidian fragment, the gypsum crystal, the capsicum pepper, and the propane flame? When we welcome these things into scholarship, poetry, science, and business, it is only to ask how they relate to human productivity, culture, and politics. We’ve been living in a tiny prison of our own devising, one in which all the stuff that concerns us are the fleshy beings that are our kindred and the stuffs with which we stuff ourselves” (Bogost 2012a: 3, emphasis added).

Putting to one side the somewhat doubtful claim that the former litany is given more credence by anyone except, perhaps, humanities scholars, here we see a claim to a collective ‘we’ that Bogost wishes to speak for and to. Further, he adds, “Let me be clear: we need not discount human beings to adopt an object-oriented position – after all, we ourselves are of the world as much as musket buckshot and gypsum and space shuttles. But we can no longer claim that our existence is special as existence” (Bogost 2012a: 8).

Indeed, if we were to take this claim seriously, then we would be driven to wonder why Bogost is writing his book at all, for “musket buckshot and gypsum and space shuttles” cannot be the addressees of this text, as patently they do not read. So object-oriented ontology (OOO) is trying to do two things here: on the one hand, to deny the specialness of humans’ existence in relation to other objects, whilst simultaneously having to write for humans and to make arguments supporting its claims – thereby acknowledging the very special existence that humans possess, namely the qualities of understanding, taking a stand on their own being, etc. This is a classic performative contradiction. Whilst it would be perfectly legitimate to outline a formalist theory or methodological position that, for the sake of the approach, limits the requirement to treat human actors as particular or special in relation to others (this is the methodological innovation within Actor-Network Theory), it is quite another thing to extend this claim into a philosophical system which is itself part of a special order of discourse particular to human beings, that is, philosophy. This so-called philosophical non-human turn is interesting for its nihilistic and conservative implications, something we now turn to in detail.

For his part, Bogost (2012a) denies that nihilism is present in his work, remarking,

[object-oriented ontology] “allows for the possibility of a new sort of humanism,” in which, as Harman adds, “humans will be liberated from the crushing correlational system.” For his part, Nick Srnicek offers opprobrium in place of optimism… “Do we need another analysis of how a cultural representation does symbolic violence to a marginal group? This is not to say that this work has been useless, just that it’s become repetitive” (Bogost 2012a: 132).

In this ‘liberation’, therefore, we are saved from the ‘crushing’ problem of repetitive accounts of marginal inequality and suffering. This is achieved by a new ‘humanism’ that rejects the human as a special case, such that the marginal problems of women, LGBT people, immigrants, asylum seekers, and the poor are replaced with the problem of a litany of objects such as “quarks, Elizabeth Bennet, single-malt scotch, Ford Mustang fastbacks, lychee fruit, love affairs, dereferenced pointers, Care Bears, sirocco winds, the Tri-City Mall, tort law, the Airbus A330, the five-hundred drachma note” (Bogost 2012a: 133).

He notes, “If we take seriously the idea that all objects recede interminably into themselves, then human perception becomes just one among many ways that objects might relate. To put things at the centre of a new metaphysics also requires us to admit that they do not exist just for us” (Bogost 2012a: 9). Leaving aside the question as to why we would want to apply that idea in the first place when it stands as hypothesis rather than expressing any form of evidence or proof, one might wonder how one is to judge between the different forms of perception in order to (re)present the litanies, let alone recognize them. This is a constant and unexamined problem within the domain of object-oriented ontology and is hardly dealt with by Harman’s notion of ‘metaphor’ or ‘alluding’ to things (Harman 2009b).

Bogost too wants to move away from the tricky epistemological problem of “access”, and instead he concentrates on metaphor as a means of understanding the way in which objects, within this system, interact. This, oddly, avoids the very real problem of mediation in object-oriented ontology and moves the focus onto a form of information transfer about objects, rather than the practice of making those objects and object-oriented ontologists’ claims about them. In effect, “metaphor” describes an operation whereby the properties of an object are ‘represented’ within another object in order to facilitate some form of interaction (which might be vicarious). Bogost writes,

Ontology is the philosophical study of existence. Object-oriented ontology (“OOO” for short) puts things at the center of this study. Its proponents contend that nothing has special status, but that everything exists equally–plumbers, cotton, bonobos, DVD players, and sandstone, for example. In contemporary thought, things are usually taken either as the aggregation of ever smaller bits (scientific naturalism) or as constructions of human behavior and society (social relativism). OOO steers a path between the two, drawing attention to things at all scales (from atoms to alpacas, bits to blinis), and pondering their nature and relations with one another as much with ourselves (Bogost 2009, see also Bogost 2012: 6).

This definition is helpful in a number of ways. Firstly, it demonstrates that in the move towards a flat ontology, attention has shifted from ontology (being) to things/objects (beings). The definition of everything as a single kind of thing – in this case an object/unit – is precisely the danger that Heidegger identified for philosophy: the ‘Being’ that explains everything, the ‘Good’ for Plato, “Substance” for Spinoza, and “Object” for object-oriented ontologists. As Bryant remarks, “there is only one type of being: objects. As a consequence, humans are not excluded, but are rather objects among the various types of objects that exist or populate the world, each with their own specific powers and capacities” (Bryant 2011: 20, original emphasis). This is a problem, as “correctness” in identifying objects as beings does not, for me, make a sufficient ontology. As Heidegger argues:

What is essential is not what we presumably establish with exactness by means of instruments and gadgets; what is essential is the view in advance which opens up the field for anything to be established (Heidegger 1995: 60).

Bogost’s work is exemplary and highly suggestive for software studies and platform studies; however, his descriptive work is an example of object-oriented onticology, rather than ontology as such. For me, this is worthy and important work: we do need to map certain kinds of objects and their interrelations. However, we also need to be aware of the consequences of certain ways of seeing and categorizing the world. The problem seems to be that object-oriented ontology has no notion of an exemplar, no special case, no shining examples. As such, it quickly descends into endless lists and litanies. As Heidegger observes,

So it happens that we, lost as we usually are in the activities of observing and establishing, believe we “see” many things and yet do not see what really is (Heidegger 1995: 60).

To draw back to the original question: what are the uses of object-oriented ontology? It seems to me that object-oriented ontology and speculative realism together reflect a worrying spirit of conservatism within philosophy. They discount the work of human activity and place it alongside a soporific litany of naturalised objects – a method that points less at the interconnected nature of things and gestures more towards the infinity of sameness, the gigantic of objects, the relentless distancelessness of a total confusion of beings (see Harman 2009a for a discussion of things and objects). In short, experience becomes passive, disoriented and overwhelming: what Heidegger described as the “terror” of pure unmitigated flatness. And with that, philosophy becomes ‘cold’ philosophy: instead of understanding, we have lists and litanies of objects. This is not so much philosophy as philosography, where rather than understanding the world, there is an attempt merely to describe it, and a worrying tendency towards the administration of things through a cataloguing operation.

These litanies – cascades and tumbling threads of polythetic classification – are linked merely by sequence, in which each item need bear no resemblance to the ones before or after. They posit no relationships and offer no narrative connections, and are therefore “essentially uncontrollable: at the limit so indeterminable that anything can be connected with anything” (Anderson 2012). But of course there is a connection, a link, a thread, performed by the philosographer, who chooses, consciously or unconsciously, the elements that make up the chain and inscribes them in books and articles. The use of object-oriented ontology, then, is bound up in its apparent conservatism, which rails at the temerity of human beings to believe in themselves, their politics, and their specialness. Instead of World, object-oriented ontology posits universe; its founding principle is the Gigantic. As Heidegger explained:

1. The gigantism of the slowing down of history (from the staying away of essential decisions all the way to lack of history) in the semblance of speed and steerability of “historical” [historisch] development and its anticipation.

2. The gigantism of the publicness as summation of everything homogeneous in favour of concealing the destruction and undermining of any passion for essential gathering.

3. The gigantism of the claim to naturalness in the semblance of what is self-evident and “logical”; the question-worthiness of being is placed totally outside questioning.

4. The gigantism of the diminution of beings in the whole in favour of the semblance of boundless extending of the same by virtue of unconditioned controllability. The single thing that is impossible is the word and representation of “impossible” (Heidegger 1999: 311).

To see what “shows up” to the philosographer, one is unsurprised to find lists that are often contaminated by the products of neoliberal capitalism: objects which could not simply appear of themselves, but required the actual concrete labour of human beings to mediate their existence. For some reason, object-oriented ontology is attracted to the ephemerality of certain objects, as if by listing them it doubly affirms its commitment to realism, or as if the longer the list, the more ‘real’ it is. There is also the tendency to attempt to shock the reader by the juxtaposition of objects that would normally be thought to be categorically different – see Bogost (2009) for a discussion of whether including Harry Potter, blinis, and humans in a list was a striking enough example. These rhetorical strategies are interesting in themselves, but I do not see them as replacements for philosophy. This demonstrates that the speculative realists have not escaped the so-called ‘correlationist circle’ (Harman 2009b), nor provided a model for thinking about the anti-correlationist paradox, which remains present in their own work.

We should therefore ask object-oriented ontologists to move beyond merely staring at the objects they see around them and to catch sight of what is being listed in their descriptive litanies. That is, by examining the lists they produce, we can see which objects they see as near and which as far, and therefore question their claims to see objects all the way down (see Bogost 2012a: 83-84). Yet as we examine these lists there appears to be a profound forgetting of Being, as it were, as they write both for and as subjects of Late Capitalism – a fact which remains hidden from them, and a seemingly major aporia in their work.

Appendix I – A Litany of Litanies: Bogost’s (2012) Alien Phenomenology Litanies [5]

Page 3: “the Civil War soldier, the guilty Manhattan project physicist, the oval-headed alien anthropomorph, and the intelligent celestial race so much more credence than the scoria cone, the obsidian fragment, the gypsum crystal, the capsicum pepper, and the propane flame”

Page 5: “sea urchins, kudzu, enchiladas, quasars, and Tesla coils”, “harmonicas or tacos”

Page 6: “hammer, haiku, and hotdogs”, “quarks or neurons”, “plumbers, cotton, bonobos, DVD players, and sandstone”, “atoms to alpacas, bits to blinis”

Page 7: “scoria cone and the green chile”, “plate tectonics, enchiladas, tourism, or digestion”, “kudzu and grizzly bears”

Page 8: “Subways flood; pipes cool and crack; insects and weather slowly devour the wood frames of homes; the steel columns of bridges and skyscrapers corrode and buckle”, “plastic and lumber and steel”, “dogs, pigs, birds, and so forth”

Page 9: “plants, fungi, protists, bacteria, etc.”, “the potato and the cannabis [sic]”, “the dog or the raven”, “musket buckshot and gypsum and space shuttles”

Page 10: “molded plastic keys and controllers, motor-driven disc drives, silicon wafers, plastic ribbons, and bits of data”, “Subroutines and middleware libraries compiled into byte code or etched onto silicon, cathode ray tubes or LCD displays mated to be insulated, conductive cabling, and microprocessors executing machine instructions that enter and exit address buses”, “African elephant or the Acropora coral”, “computer or a microprocessor, or a ribbon cable”

Page 11: “The unicorn and the combine harvester, the color red and methyl alcohol, quarks and corrugated iron, Amelia Earhart and dyspepsia”

Page 12: “quarks, Harry Potter, keynote speeches, single-malt scotch, Land Rovers, lychee fruit, love affairs, dereferenced pointers, Mike ‘The Situation’ Sorrentino, bozons, horticulturalists, Mozambique, Super Mario Bros.”

Page 22: “yoghurt or tonsils or Winnie the Pooh”, “the cargo holds, the shipping containers, the hydraulic rams, the ballast water, the twist locks, the lashing rods, the crew, their sweaters, and the yarn out of which those garments are knit.”

Page 23: “cinder blocks and bendy straws and iron filings”

Page 25: “tailgate of a red pickup truck, the drum, handle, tailgate, asphalt, pepper, metal, and propane”, “pepper and iron, tailgate and Levi’s 501s, asphalt and pickup”, “brewing tea, shedding skin, photosynthesizing sugar, igniting compressed fuel.”

Page 26: “extraction, homogenization, distillation, refrigeration, etc.”

Page 27: “a mango, a willow tree, or a flat smooth stone”, “the cell… the revolving feeder… philology of the fictional Languages of Arda…”

Page 34: “Mountain summits and gypsum beds, chile roasters and buckshot, microprocessors and ROM chips”, “grease, juice, gunpowder and gypsum”

Page 39: “lighthouse, dragonfly, lawnmower, and barley”

Page 47: “Mullahs, and monsters, cushioned skyscrapers bent back on themselves”

Page 48: “black lampposts… the Snake River… a young girl…”

Page 49: “floodlight, screen print, Mastercard, rubber, asphalt, taco, Karmann Ghia, waste bin, oil stain”

Page 50: “tire and chassis, the ice milk and cup, the buckshot and soil”

Page 56: “puella, puellae, puellae (sic), puellam, puella

Page 58: “Dictionaries, grocery stores, Rio de Janeiro, La Brea, and Beverly”

Page 59: “doors, toasters and computers”

Page 61: “Smoke… dog teeth of a collar…. Chicken neck…”, “the taste of the honey-sweet ma’sal heated under the charcoal in the hookah’s bowl, or the sensation of foot on clutch as the collar of the synchro obtains a friction catch on the gear, or the smooth, thin appearance of broth as it separates from fat and bone in the soup pot”

Page 65: “Smoke and mouth, collar and gear, cartilage and water, bat and branch, roaster and green chile, button and input bus”

Page 74: “British men…, women, Congalese (sic), horses, and redwoods”, “fried chicken buckets, Pontiac Firebirds, and plastic picnicware”

Page 76: “the snowblower, the persimmon, the asphalt”

Page 109: “volcanoes, hookahs, muskets, gearshifts, gypsum, and soups”

Page 110: “painter, the seaman, the tightrope walker, or the banker”

Page 111: “people or toothbrushes or siroccos”, “words and ink and paper, a painting of pigments and canvas and medium, a philosophy of maxims and arguments and evidence, a house of studs and sheetrock and pipes”

Page 114: “Midgrade dealer D’Angelo Barksdale, detective James McNulty, kingpin Avon Barksdale, police lieutenant Cedric Daniels, stevedore Frank Sobotka, mayoral hopeful Tommy Carcetti, newspaper editor Gus Haynes”, “the Maryland Transit Authority bus that trundles through the Broadway East neighborhood; the synthetic morphine derivative diacetylmorphine hydrochloride, which forms the type of heroin power addicts freebase; Colt .45 (the firearm), and Colt 45 (the malt liquor)”, “dealers, cops, longshoreman, city councilmen, middle-school students, and journalists”

Page 115: “the compression heat of a diesel engine combustion chamber, or the manner by which corn or sugar additives increase the alcoholic content of malt, or the dissolution of heroin in water atop the concave surface of a spoon”

Page 117: “Clinker-built oak planks and fondant, keel, hull, and sponge cake, white-topped waves and spread frosting, oar stay and cookie”

Page 119: “the Kitchen-Aid 5 Quart Stand Mixer, the preheated oven, the mixing bowl, and the awaiting gullet”

Page 124: “religion, science, philosophy, custom, or opinion”, “flour granule, firearm, civil justice system, longship, fondant”, “cinder-blocks, Chicken McNuggets, freighter ships, and graffiti”

Page 133: “quarks, Elizabeth Bennet, single-malt scotch, Ford Mustang fastbacks, lychee fruit, love affairs, dereferenced pointers, Care Bears, sirocco winds, the Tri-City Mall, tort law, the Airbus A330, the five-hundred drachma note”

Notes

[1] I am grateful to Ian Bogost for arranging for a new copy of Alien Phenomenology to be sent to me after I received a curiously corrupted first copy.

[2] There is a striking computational construction to this statement, and it bears a deep affinity with the conceptualization of objects within object-oriented programming.

[3] Elsewhere (Berry 2012) I have remarked on the computational nature of lists, more generally conceived as ‘collections’. Many programming languages were created to computationally manipulate lists, often called list-programming languages, such as LISP and PROLOG. The similarity with object-oriented ontology is extremely suggestive.

[4] Whether by accident or design Bogost compiles lists of seemingly ‘male’ items of interest: gears, machinery, Mexican food, digestive issues, and computer technology. It is also notable that certain items repeat and certain themes are easy to discern. This may be an ironic move, but it also reveals the partiality of the list-making method.

[5] This litany would not have been compiled without the kind invitation of Jill Rettberg and Scott Rettberg to present a paper on the New Aesthetic at the University of Bergen, 21/05/2012. On the return journey I opted to take the mountain train back to Oslo, which, lasting over six hours, gave me the time and distraction-free environment in which I could compile this list. I also learned that compiling litanies of litanies is at best painful and at worst something akin to mental torture. No objects were knowingly harmed in the compiling of this litany.

Bibliography

Anderson, P. (2012) The Force of the Anomaly, The London Review of Books, April 26th 2012.

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave.

Berry, D. M. (2012) What is the New Aesthetic?, Stunlaw, accessed 22/05/2012, http://stunlaw.blogspot.com/2012/04/what-is-new-aesthetic.html

Bogost, I. (2009) What is Object-Oriented Ontology? A Definition For Ordinary Folk, accessed 20/05/2012, http://www.bogost.com/blog/what_is_objectoriented_ontolog.shtml

Bogost, I. (2009b) Object-Oriented P*, accessed 23/05/2012, http://www.bogost.com/blog/objectoriented_p.shtml

Bogost, I. (2011) Writing Books People Want To Read: Or How To Stake Vampire Publishing, accessed 23/05/2012, http://www.bogost.com/blog/writing_books_people_want_to_r.shtml

Bogost, I. (2012a) Alien Phenomenology: or What It’s Like To Be A Thing, University of Minnesota Press.

Bogost, I. (2012b) The New Aesthetic Needs to Get Weirder, The Atlantic, accessed 18/04/2012, http://www.theatlantic.com/technology/archive/2012/04/the-new-aesthetic-needs-to-get-weirder/255838/

Bryant, L. (2011) The Democracy of Objects, Open Humanities Press.


Charlesworth, J. J. (2012) We are the droids we’re looking for: the New Aesthetic and its friendly critics, accessed 25/05/2012, http://blog.jjcharlesworth.com/2012/05/07/we-are-the-droids-were-looking-for-the-new-aesthetic-and-its-friendly-critics/

Harman, G. (2009a) Technology, objects and things in Heidegger, Cambridge Journal of Economics, accessed 18/04/2012, http://cje.oxfordjournals.org/content/early/2009/05/29/cje.bep021.full?ijkey=oxf1js0onhVC73f&keytype=ref

Harman, G. (2009b) what correlationism reminds me of, accessed 23/05/2012, http://doctorzamalek2.wordpress.com/2009/11/08/what-correlationism-reminds-me-of/

Heidegger, M. (1995) Basic Questions of Philosophy: Selected Problems of Logic, Indiana University Press.

Heidegger, M. (1999) Contributions to Philosophy (From Enowning), Indiana: Indiana University Press.

Morton, T. (2011) Here Comes Everything: The Promise of Object-Oriented Ontology, Qui Parle, accessed 25/05/2012, http://ucdavis.academia.edu/TimMorton/Papers/971122/Here_Comes_Everything_The_Promise_of_Object-Oriented_Ontology

Tagged , , , , , ,

Glitch Ontology

The digital (or computational) presents us with a number of theoretical and empirical challenges which we can understand within this commonly used set of binaries:

  • Linearity vs Hypertextuality
  • Narrative vs Database
  • Permanent vs Ephemeral
  • Bound vs Unbound
  • Individual vs Social
  • Deep vs Shallow
  • Focused vs Distracted
  • Close Read vs Distant Read
  • Fixed vs Processual
  • Digital (virtual) vs Real (physical)

Understanding the interaction between the digital and the physical is part of the heuristic value that these binaries bring to the research activity. However, in relation to the interplay between the digital and the cultural, examples such as Marquese Scott's Glitch-inspired Dubstep dancing (below) raise important questions about how these binaries interact and are represented in culture more generally (e.g. as notions of The New Aesthetic).

Glitch inspired Dubstep Dancing (Dancer: Marquese Scott)

Here I am not interested in critiquing the use of binaries per se (a critique which of course remains pertinent – and modulations might be a better way to think of digital irruptions); rather, I think they are interesting for the indicative light they cast on drawing analytical distinctions between categories and collections related to the digital itself. We can see them as lightweight theories, and as Moretti (2007) argues:

Theories are nets, and we should evaluate them, not as ends in themselves, but for how they concretely change the way we work: for how they allow us to enlarge the… field, and re-design it in a better way, replacing the old, useless distinctions… with new temporal, spatial, and morphological distinctions (Moretti 2007: 91, original emphasis).

These binaries can be useful means of thinking through many of the positions and debates that take place within both theoretical and empirical work on mapping the digital.

  1. Linearity versus Hypertextuality: The notion of a linear text, usually fixed within a paper form, is one that has been taken for granted within the humanities. Computational systems, however, have challenged this model of reading because of the ease with which linked data can be incorporated into digital text. This has meant that experimentation with textual form, and the ways in which a reader might negotiate a text, can be explored. Of course, the primary model for hypertextual systems is today strongly associated with the World Wide Web and HTML, although other systems have been developed.
  2. Narrative versus Database: The importance of narrative as an epistemological frame for understanding has been hugely significant in the humanities. Whether as a starting point for beginning an analysis, or through attempts to undermine or problematize narratives within texts, humanities scholars have usually sought to use narrative as an explanatory means of exploring both the literary and the historical. Computer technology, however, has offered scholars an alternative way of understanding how knowledge might be structured: the notion of the database. This approach, personified in the work of Lev Manovich (2001), has been argued to represent an important aspect of digital media and, more importantly, of the remediation of old media forms in digital systems.
  3. Permanent versus Ephemeral: Much 'traditional' or 'basic' humanities scholarship has been concerned with objects and artifacts that are relatively stable compared with digital works. This is especially so in disciplines that have internalized the medium specificity of a form, for example the book in English Literature, which shifts attention to the content of the medium. In contrast, digital works are notoriously ephemeral, both in the materiality of their substrates (e.g. computer memory chips, magnetic tape/disks, plastic disks, etc.) and in the plasticity of their form. This also bears upon the lack of an original from which derivative copies are made; indeed, it could be argued that in the digital world there is only the copy (although recent moves in cloud computing and digital rights management are partial attempts to re-institute the original through technical means).
  4. Bound versus Unbound: A notable feature of digital artifacts is that they tend to be unbound in character. Unlike books, which have clear boundary points marked by the cardboard that makes up their covers, the boundaries of digital objects are drawn by the file format in which they are encoded. This makes for an extremely permeable border, one made of the same digital code that marks the content. Additionally, digital objects are easily networked, aggregated, processed and transcoded into other forms, further problematizing any boundary point. In terms of reading practices, the permeability of these boundaries can radically change the reading experience.
  5. Individual versus Social: Traditional humanities has focused strongly on approaches to texts that are broadly individualistic, inasmuch as the reader is understood to undertake certain bodily practices (e.g. sitting in a chair, book on knees, concentration on the linear flow of text). Digital technologies, particularly when networked, open these practices up to a much more social experience of reading, with e-readers like the Amazon Kindle encouraging the sharing of highlighted passages, and Tumblr-type blogs and Twitter enabling discussion around and within the digital text.
  6. Deep versus Shallow: Deep reading is the presumed mode of understanding that requires time and attention to develop a hermeneutic reading of a text; this form requires humanistic reading skills to be carefully learned and applied. In contrast, a shallow mode is a skimming or surface reading of a text, more akin to gathering a general overview or précis of it.
  7. Focused versus Distracted: Relatedly, the notion of focused reading is also implicitly understood as an important aspect of humanities scholarship. This is the focus on a particular text, set of texts or canon, and the space and time to give full attention to them. By contrast, in a world of real-time information and multiple windows on computer screens, reading practices are increasingly distracted, partial and fragmented (hyperattention).
  8. Close Reading versus Distant Reading: Distant reading is the application of technologies to enable a great number of texts to be incorporated into an analysis through the ability of computers to process large quantities of text relatively quickly. Moretti (2007) has argued that this approach allows us to see social and cultural forces at work through collective cultural systems.
  9. Fixed versus Processual: The digital medium facilitates new ways of presenting media that are highly computational, and this raises new challenges for scholarship into new media and for the methods used to approach these mediums. It also raises questions for older humanities disciplines that increasingly access their research objects through the mediation of processual computational systems, and more particularly through software and computer code.
  10. Real (physical) versus Digital (virtual): This is a common dichotomy that draws some form of dividing line between the so-called real and the so-called digital.
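Item 8 above, distant reading, can be illustrated with a minimal sketch: instead of closely reading any single text, a computer aggregates simple measures across a whole corpus at once. The tiny 'texts' below are hypothetical stand-ins; real distant reading operates over thousands of works and far richer measures than word frequency.

```python
# A minimal sketch of 'distant reading': aggregate word frequencies
# across a corpus rather than closely reading any single text.
from collections import Counter
import re

# Hypothetical miniature corpus standing in for thousands of texts.
corpus = {
    "text_a": "The network of machines reads the archive of culture.",
    "text_b": "Culture flows through the archive and the network.",
    "text_c": "Machines process culture; the archive grows.",
}

def word_counts(text):
    # Lower-case and tokenise on alphabetic runs only.
    return Counter(re.findall(r"[a-z]+", text.lower()))

# The 'distant' view: one aggregate over the whole corpus.
totals = Counter()
for text in corpus.values():
    totals += word_counts(text)

print(totals.most_common(3))
```

The point is not the counting itself but the shift in scale: the unit of analysis becomes the corpus, and patterns (here, which words dominate across all texts) become visible only in aggregate.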

The New Aesthetic ‘pixel’ fashion

I am outlining these binaries because I think they are useful for helping us to draw the contours of what I elsewhere call 'computationality', and of its relationship to the New Aesthetic. In order to move beyond a 'technological sublime', we should begin the theoretical and empirical projects through the development of 'cognitive maps' (Jameson 2006). Additionally, as the digital increasingly structures the contemporary world, curiously, it also withdraws, becoming harder and harder for us to focus on as it is embedded, hidden, off-shored or merely forgotten about. Part of the challenge is to bring the digital (code/software) back into visibility for research and critique.

The New Aesthetic is a means of showing how the digital surfaces in a number of different places and contexts. It is not purely digital production or output; it can also be the concepts and frameworks of the digital that are represented (e.g. voxels). Although the New Aesthetic has tended to highlight 8-bit visuals and the 'sensor-vernacular' or 'seeing like a machine' (e.g. Bridle/Sterling), I believe there is more to be explored in terms of 'computationality'. Identified as such, the 'New Aesthetic' is a useful concept for thinking through and about the visual representation of computationality. Or, better, for re-presenting the computational more generally and its relationship to a particular way-of-being in the world and its mediation through technical media (here specifically computational media).

Preen Spring/Summer 2012 | Source: Style.com

Previously I argued that this New Aesthetic is a form of 'abduction aesthetic' linked to the emergence of computationality as an ontotheology. Computationality is here understood as a specific historical epoch defined by a certain set of computational knowledges, practices, methods and categories. The abductive aesthetic (or pattern aesthetic) is linked to a notion of computational patterns and pattern recognition as a means of cultural expression. I argue that we should think about software/code through a notion of computationality as an ontotheology. Computationality (as an ontotheology) creates a new ontological 'epoch', a new historical constellation of intelligibility. In other words, code/software is the paradigmatic case of computationality, and presents us with a research object which is located at all major junctures of modern society and is therefore unique in enabling us to understand the present situation – as a collection, network, or assemblage of 'coded objects' or 'code objects'.

Computationality is distinct from the 'challenging-forth' of technicity as Heidegger described it – in contrast, computationality has a mode of revealing that is a 'streaming-forth'. One aspect of this is that streaming-forth generates second-order information and data to maintain a world which is itself seen and understood as flow, but drawn from a universe which is increasingly understood as object-oriented and discrete. Collected information is processed, and feedback is part of the ecology of computationality. Computational devices not only withdraw – indeed, mechanical devices such as car engines clearly also withdraw – they both withdraw and constantly press to be present-at-hand, in alternation. This I call a form of glitch ontology.

Technicity (modern technology) versus Computationality (postmodern technology):

Mode of Revealing
  Technicity: Challenging-forth (Gestell).
  Computationality: Streaming-forth.

Paradigmatic Equipment
  Technicity: Technical devices, machines.
  Computationality: Computational devices, computers, processors.

Goals (projects)
  Technicity: 1. Unlocking, transforming, storing, distributing, and switching about Standing Reserve (Bestand). 2. Efficiency.
  Computationality: 1. Trajectories, processing information, algorithmic transformation (aggregation, reduction, calculation), as data reserve (Cloudscape). 2. Computability.

Identities (roles)
  Technicity: Ordering-beings.
  Computationality: Streaming-beings.

Paradigmatic Epistemology
  Technicity: Engineer. Engineering is exploiting basic mechanical principles to develop useful tools and objects, for example using time-motion studies, Methods-Time Measurement (MTM), and instrumental rationality.
  Computationality: Design. Design is the construction of an object or a system, but not just what it looks like and feels like; design is how it works and the experience it generates, for example using information theory, graph theory, data visualisation, communicative rationality, and real-time streams.

Table 1: Technicity vs Computationality

Computational devices appear to oscillate rapidly between Vorhandenheit/Zuhandenheit (present-at-hand/ready-to-hand) – a glitch ontology. Or, perhaps better, they are constantly becoming ready-to-hand/unready-to-hand in quick alternation. And by quick, this can happen in microseconds, milliseconds, or seconds, repeatedly in quick succession. This aspect of breakdown has been acknowledged as an issue within human-computer design and is seen as one of pressing concern, to be 'fixed' or made invisible to the computational device user (Winograd and Flores 1987).

The oscillation creates the ‘glitch’ that is a specific feature of computation as opposed to other technical forms (Berry 2011). This is the glitch that creates the conspicuousness that breaks the everyday experience of things, and more importantly breaks the flow of things being comfortably at hand. This is a form that Heidegger called Unreadyness-to-hand (Unzuhandenheit). Heidegger defines three forms of unreadyness-to-hand: Obtrusiveness (Aufdringlichkeit), Obstinacy (Aufsässigkeit), and Conspicuousness (Auffälligkeit), where the first two are non-functioning equipment and the latter is equipment that is not functioning at its best (see Heidegger 1978, fn 1). In other words, if equipment breaks you have to think about it.

It is important to note that conspicuousness is not completely broken-down equipment. Conspicuousness, then, ‘presents the available equipment as in a certain unavailableness’ (Heidegger 1978: 102–3), so that as Dreyfus (2001: 71) explains, we are momentarily startled, and then shift to a new way of coping, but which, if help is given quickly or the situation is resolved, then ‘transparent circumspective behaviour can be so quickly and easily restored that no new stance on the part of Dasein is required’ (Dreyfus 2001: 72). As Heidegger puts it, it requires ‘a more precise kind of circumspection, such as “inspecting”, checking up on what has been attained, [etc.]’ (Dreyfus 2001: 70).

In other words, computation, due to its glitch ontology, continually forces a contextual slowing-down at the level of the mode of being of the user; thus the continuity of flow or practice is interrupted by minute pauses and breaks (these may be beyond conscious perception as such). This is not to say that analogue technologies do not break down; the difference is the conspicuousness of digital technologies in their everyday working, in contrast to the obstinacy or obtrusiveness of analogue technologies, which tend either to work or not. I am also drawing attention to the discrete granularity of the conspicuousness of digital technologies, which can be measured technically in seconds, milliseconds, or even microseconds. This glitch ontology raises basic questions about our experience of computational systems.

My interest in the specificity of the New Aesthetic stems from its implicit recognition of the extent to which digital media have permeated our everyday lives. We could perhaps say that the New Aesthetic is a form of 'breakdown' art linked to the conspicuousness of digital technologies. Not just the use of digital tools, of course, but also a language of new media (as Manovich would say): the frameworks, structures, concepts and processes represented by computation. That is, both the presentation of computation and its representational modes. It is significant both to the extent that it represents computation and in that it draws attention to this glitch ontology, for example through the representation of the conspicuousness of glitches and other digital artefacts (see also Menkman 2010 for a notion of critical media aesthetics and the idea of glitch studies).

Other researchers (Beaulieu and de Rijcke 2012) have referred to 'Network Realism' to draw attention to some of these visual practices, particularly the ways of producing these networked visualisations. However, the New Aesthetic is interesting in remaining focused on the aesthetic in the first instance (rather than the sociological, etc.). This is useful in order to examine the emerging visual culture, but also to try to discern the aesthetic forms instantiated within it.

As I argued previously, the New Aesthetic is perhaps the beginning of a new kind of Archive, an Archive in Motion – what Bernard Stiegler (n.d.) called the Anamnesis (the embodied act of memory as recollection or remembrance) combined with Hypomnesis (the making-technical of memory through writing, photography, machines, etc.). Thus, particularly in relation to the affordances given by the networked and social media within which it circulates, combined with a set of nascent practices of collection, archive and display, the New Aesthetic is distinctive in a number of ways.

Firstly, it gives a description and a way of representing and mediating the world in and through the digital, understandable as an infinite archive (or collection). Secondly, it alternately highlights that something digital is happening in culture – of which we have only barely been conscious – and the way in which culture is happening to the digital. Lastly, the New Aesthetic points to the direction of travel for the possibility of a Work of Art in the digital age – something Heidegger thought impossible under the conditions of technicity, but which remains open, perhaps, under computationality.

In this, the New Aesthetic is, however, a pharmakon, in that it is both potentially poison and cure for an age of pattern matching and pattern recognition. If the archive was the set of rules governing the range of expression following Foucault, and the database the grounding cultural logic of software cultures following Manovich, we might conclude that the New Aesthetic is the cultural eruption of the grammatisation of software logics into everyday life. The New Aesthetic under a symptomology, can be seen surfacing computational patterns, and in doing so articulates and re-presents the unseen and little understood logic of computation, which lies like plasma under, over, and in the interstices between the modular elements of an increasingly computational society.

Bibliography

Beaulieu, A. and de Rijcke, S. (2012) Network Realism, accessed 20/05/2012, http://networkrealism.wordpress.com/

Dreyfus, H. (2001) Being-in-the-world: A Commentary on Heidegger’s Being and Time, Division I. USA: MIT Press.

Heidegger, M. (1978) Being and Time. London: Wiley–Blackwell.

Jameson, F. (2006) Postmodernism or the Cultural Logic of Late Capitalism, in Kellner, D. Durham, M. G. (eds.) Media and Cultural Studies Keyworks, London: Blackwell.

Manovich, L. (2001) The Language of New Media. London: MIT Press.

Menkman, R. (2010) Glitch Studies Manifesto, accessed 20/5/2012, http://rosa-menkman.blogspot.com/2010/02/glitch-studies-manifesto.html

Moretti, F. (2007) Graphs, Maps, Trees: Abstract Models for a Literary History, London, Verso.

Stiegler, B. (n.d.)  Anamnesis and Hypomnesis, accessed 06/05/2012, http://arsindustrialis.org/anamnesis-and-hypomnesis

Winograd, T. and Flores, F. (1987) Understanding Computers and Cognition: A New Foundation for Design, London: Addison Wesley.

New Book: Life in Code and Software: Mediated life in a complex computational ecology

Life in Code and Software (cover image by Michael Najjar)
New book out in 2012 from Open Humanities Press: Life in Code and Software: Mediated life in a complex computational ecology.


This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. Life in Code and Software introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology, made up of disparate software ecologies, that we inhabit. As such, we need to take account of this new computational environment and think about how we today live in a highly mediated, code-based world. That is, we live in a world where computational concepts and ideas are foundational, or ontological, which I call computationality, and within which code and software become the paradigmatic forms of knowing and doing, such that other candidates for this role (air, the economy, evolution, the environment, satellites, etc.) are understood and explained through computational concepts and categories.




Taking Care of the New Aesthetic

Strangely, and somewhat unexpectedly, James Bridle unilaterally closed the New Aesthetic Tumblr blog today, 6 May 2012, announcing 'The New Aesthetic tumblr is now closed', with some particular and general thanks and very little information about future plans. Perhaps this was always Bridle's intention for a private project, but one cannot help wondering whether the large amount of attention, the move to a public and contested concept, and the loss of control that this entailed may have encouraged a re-assertion of control. If so, this is a great pity and perhaps even an act of vandalism.

Harpa, Iceland  (Berry 2011)

This, then, is a critical turning point, or krisis,[1] for the nascent New Aesthetic movement, and, for me, the blog's closure heralds an interesting struggle: what is the New Aesthetic? Who owns or controls it? And in what directions can it now move? Certainly, I am of the opinion that to have closed the blog in this way insinuates a certain proprietary attitude to the New Aesthetic. Considering that the Tumblr blog has largely been a crowd-sourced project, giving no explanation and allowing no debate or discussion over the closure makes it look rather as if it harvested people's submissions on what could have been a genuinely participatory project. Whichever way it is cast, James Bridle looks rather high-handed in light of the many generous and interesting discussions that the New Aesthetic has thrown up across a variety of media.

One of the key questions will be the extent to which this blog was a central locus of, or collection for representing, the New Aesthetic more generally. Personally, I found myself less interested in the Tumblr blog, which became increasingly irrelevant in light of the high level of discussion found on Imperica, The Creators Project, The Atlantic, Crumb and elsewhere. But there is clearly a need for something beyond the mere writing and rewriting of the New Aesthetic that many of the essays around the topic represented. Indeed, there is a need for an inscription or articulation of the New Aesthetic through multiple forms, both visual and written (not to mention through the sensorium more generally). I hope that we will see a thousand New Aesthetic Pinterest, Tumblr, and PinIt sites bloom across the web.

Urban Cursor is a GPS enabled object (Sebastian Campion 2009)

Nonetheless, it is disappointing to see the number of Twitter commentators who have tweeted the equivalent of 'well, that was that', as if the single action of an individual were decisive in stifling a new and exciting way of articulating a way of being in the world. Indeed, this blog closure highlights the importance of taking care of the New Aesthetic, especially in its formative stages of development. Whilst a number of dismissive and critical commentaries have been written about the New Aesthetic, I feel that there is a kernel of something radical and interesting happening which still remains to be fully articulated, expressed, and made manifest in and through various mediums of expression.

The New Aesthetic blog might be dead, but the New Aesthetic as a way of conceptualising the changes in our everyday life that are made possible in and through digital technology is still unfolding. For me the New Aesthetic was not so much a collection of things as the beginning of a new kind of Archive, an Archive in Motion, which combined what Bernard Stiegler called the Anamnesis (the embodied act of memory as recollection or remembrance) and Hypomnesis (the making-technical of memory through writing, photography, machines, etc.). Stiegler writes,

We have all had the experience of misplacing a memory bearing object – a slip of paper, an annotated book, an agenda, relic or fetish, etc. We discover then that a part of ourselves (like our memory) is outside of us. This material memory, that Hegel named objective, is partial. But it constitutes the most precious part of human memory: therein, the totality of the works of spirit, in all guises and aspects, takes shape (Stiegler n.d.).

Thus, particularly in relation to the affordances given by the networked and social media within which it circulated, combined with a set of nascent practices of collection, archive and display, the New Aesthetic is distinctive in a number of ways. Firstly, it gives a description and a way of representing and mediating the world in and through the digital, that is understandable as an infinite archive (or collection). Secondly, it alternately highlights that something digital is happening in culture – and which we have only barely been conscious of – and the way in which culture is happening to the digital.  Lastly, the New Aesthetic points the direction of travel for the possibility of a Work of Art in the digital age.

In this, the New Aesthetic is something of a pharmakon, in that it is both potentially poison and cure for an age of pattern matching and pattern recognition. Inasmuch as the archive was the set of rules governing the range of what can be verbally, audio-visually or alphanumerically expressed at all, and the database is the grounding cultural logic of software cultures, the New Aesthetic is the cultural eruption of the grammatisation of software logics into everyday life. That is, the New Aesthetic is a deictic moment which sheds light on changes in our lives that imperil things, practices, and engaging human relations, and on the desire to make room for such relations, particularly when they are struggling to assert themselves against the dominance of neoliberal governance, bureaucratic structures and market logics.[2]

The New Aesthetic, in other words, brings these patterns to the surface, and in doing so articulates the unseen and little understood logic of computational society and the anxieties that this introduces.

Notes

[1] krisis: a separating, power of distinguishing, decision , choice, election, judgment, dispute.


[2] A deictic explanation is here understood as one which articulates a thing or event in its uniqueness. 

Bibliography

Stiegler, B. (n.d.)  Anamnesis and Hypomnesis, accessed 06/05/2012, http://arsindustrialis.org/anamnesis-and-hypomnesis
