
Against the Computational Creep

In this short post I want to think about the limits of computation: not the theoretical limits of the application or theorisation of computation itself, but the limits within which computation, in a particular context, should be contained. This is necessarily a normative position, but what I am trying to explore is the point at which computation, which can bring great advantages to a process, institution or organisation, starts to undermine or corrode the way in which that group, institution or organisation is understood, how it functions, or how it creates a shared set of meanings. Here, though, I will limit myself to the theorisation of this, rather than its methodological implications, and to how we might begin to develop a politics of computation that is able to test and articulate these limits, as part of a set of critical approaches which are also a politicisation of algorithms and of data.

By computational creep I am interested in the development of computation as a process rather than an outcome or thing (Ross 2017: 14). This notion of “creep” has been usefully identified by Ross in relation to extreme political movements that advance by what he calls “positive intermingling”.[1] I think that this is a useful way to think about computationalism, by which I do not merely mean the idea that consciousness is modelled on computation (e.g. see Golumbia 2009), but more broadly a set of ideas and a style of thought which holds that computational approaches are by their very nature superior to other ways of thinking and doing (Berry 2011, 2014). This is also related to the notion that anything that has not been “disrupted” by computation is, by definition, inferior in some sense, or is latent material awaiting its eventual disruption or reinvention through the application of computation. I would like to argue that this process of computational creep moves through six stages:

  1. Visionary-computational: Computation is suggested as a solution to an existing system or informal process. These discourses are articulated with very little critical attention to the detail of making computational systems or the problems they create. Usually, as Golumbia (2017) explains, they draw on a metaphysics of information and computation that bears little relation to the material reality of the eventual or existing computational systems. It is here, in particular, that the taken-for-grantedness of the improvements offered by computation is uncritically deployed, usually with little resistance.
  2. Proto-computational: One-off prototypes are developed to create notional efficiencies, manage processes, or ease the reporting and aggregation of data. Often there is an associated discourse claiming that this creates “new ways of seeing” that enable patterns to be identified which were previously missed. These systems often do not meet the required needs, but these early failures, rather than being taken as putting the computational into question, serve to justify more computation, often more radically implemented, with greater change being called for in order to make the computational work.
  3. Micro-computational: A wider justification emerges for small-scale projects to implement computational microsystems. These are often complemented by the discursive rationalisation of informal processes, or the justification of these systems by the greater insight they produce. This is where a decision has been taken to begin computational development, sometimes at a lightweight scale, but nonetheless the language of computation, both technically and as metaphor, starts to be deployed more earnestly as justification.
  4. Meso-computational: Medium-scale systems are created which draw from or supplement the minimal computation already in place. This discourse is often manifest in multiple, sometimes co-existing and incompatible computations, differing ways of thinking about algorithms as a solution to problems, and multiple and competing data acquisition and storage practices. At this stage the computational is beyond question: it is taken a priori that a computational system is required, and where there are failures, more computation and more social change to facilitate it are demanded.
  5. Macro-computational: Large-scale investment is made to manage what has become a complex informational and computational ecology. This discourse is often associated with attempts to create interoperability through mediating systems or the provision of new interfaces for legacy computational systems. At this stage, computation is seen as a source of innovation and disruption that rationalises social processes and helps manage and control individuals. These are taken to be a good in and of themselves, to avoid mistakes, bad behaviour, poor social outcomes or suchlike. The computational is now essentially metaphysical in its justificatory deployment, and the suggestion that computation might be making things worse is usually met with derision.
  6. Infra-computational: Calls are made for the overhaul and/or replacement of major components of the systems, perhaps with a platform, and for the rationalisation of social practices through user interface design, hierarchical group controls over data, and centralised data stores. This discourse is often accompanied by large-scale data tracking, monitoring and control over individual work and practices. This is where the notion of the top-view, that is, the idea of management information systems (MIS), data analytics, large-scale Big Data pattern-matching and control through algorithmic intervention, is often reinforced. In this phase a system of data requires the free movement of data through the system via an open definition (e.g. open data, open access, open knowledge), which allows the standardisation and shareability of data entities, and therefore further processing and softwarization. This phase often serves as an imaginary and is therefore not necessarily ever completed, its failures serving as further justification for new infrastructures and new systems to replace earlier failed versions.

This line of thinking draws on the work of David Golumbia, particularly the notion of Matryoshka dolls that he takes from the work of Philip Mirowski. This refers to multiple levels or shells of ideas that form a system of thinking, but which is not necessarily coherent as such, nor lacking in contradiction, particularly across the different layers of the shells. This is what “Mirowski calls the ‘Russian doll’ approach to the integration of research and praxis in the modern world” (Golumbia 2017: 5). Golumbia makes links between this way of thinking about neoliberalism as a style of thought that utilises this multi-layered aspect and technolibertarianism, but here I want to think about computational approaches more broadly, that is, as instrumental-rational techniques of organisation. In other words, I want to point to the way in which computation is implemented, usually in a small-scale way, within an institutional context, and acts as an entry-point for further rationalisation and computation. This early opening creates the opportunity for more intensive computation, which is implemented in a bricolage fashion: at least initially, there is no systematic attempt to replace an existing system, but over time, with the addition and accretion of computational partialities, calls grow for the overhaul of what is now a tangled and somewhat contradictory series of micro-computationalisms into a broader computational system or platform. Eventually this leads to a macro- or infra-computational environment which can be described as functioning as algorithmic governmentality, but which remains ever unfinished, with inconsistencies, bugs and irrationalities throughout the system (see Berns and Rouvroy 2013). The key point is that in all stages of computationally adapting an existing process, there are multiple overlapping and sometimes contradictory processes in operation, even in large-scale computation.

Here I think that Golumbia’s discussion of the “sacred myths among the digerati” is very important, as it is this set of myths that goes unquestioned, especially early on in the development of a computational project, particularly at what I am calling the visionary-computational and proto-computational phases, but equally throughout the growth of computational penetration. Some of these myths include: claims of efficiency, the notion of cost savings, the idea of communications improvement, and the safeguarding of corporate or group memory. In other words, before a computerisation project is started, these justifications are already being mobilised in order to justify it, without any critical attention to where these a priori claims originate or to their likely truth content.

This use of computation is not limited to standardised systems, of course, by which I mean instrumental-rational systems that are converted from a paper-based process into a software-based process. Indeed, computation is increasingly being deployed in a cultural and sociological capacity: for example, to manage individuals and their psychological and physical well-being, to manage or shape culture through interventions and monitoring, and to shape the capacity to work together as teams and groups, and hence to produce particular kinds of subjectivity. Here there are questions more generally for automation and the creation of what we might call human-free technical systems, but also for the conditions of possibility for what Bernard Stiegler calls the Automatic Society (Stiegler 2016). It is also related to the deployment of digital and computational systems in areas not previously thought of as amenable to computation, for example in the humanities, as represented by the growth of digital humanities (Berry 2012, Berry and Fagerjord 2017).

That is to say, “the world of the digital is everywhere structured by these fictionalist equivocations over the meanings of central terms, equivocations that derive an enormous part of their power from the appearance that they refer to technological and so material and so metaphysical reality” (Golumbia 2017: 34). Of course, the reality is that these claims are often unexamined and uncritically accepted, even when they are corrosive in their implementations. Where these computationalisms are disseminated and their creep goes beyond social and cultural norms, it is right that we ask: how much computation can a particular social group or institution stand, and what should be the response to it? (See Berry 2014: 193 for a discussion in relation to democracy). We must certainly move beyond taking the partial success of computation to imply that more computation is necessarily better. By critiquing computational creep, through the notion of the structure of the Russian doll in relation to computational processes of justification and implementation, together with the metaphysical a priori claims for the superiority of computational systems, we are better able to develop a means of containment, or an algorithmic criticism. Thus, through a critical theory that provides a ground for normative responses to the unchecked growth of computation across multiple aspects of our lives and society, we can look to the possibilities of computation without seeing it as necessarily inevitable or as deterministic of our social life (see Berry 2014).

Notes

[1] The title “Against the Computational Creep” is a reference to the very compelling book Against the Fascist Creep by Alexander Reid Ross. The intention is not to make an equivalence between fascism and computation; rather, I am interested in the concept of the “creep”, which Ross explains involves the small-scale, gradual use of particular techniques, the importation of ways of thinking, or the use of a form of entryism. In this article, the notion of computational creep therefore refers to the piecemeal use of computation, or the importation of computational practices and metaphors into a previously non-computational arena or sphere, and the resultant change in the ways of doing, ways of seeing and ways of being that this computational softwarization tends to produce.

Bibliography

Berns, T. and Rouvroy, A. (2013) Gouvernementalité algorithmique et perspectives d’émancipation : le disparate comme condition d’individuation par la relation?, accessed 14/12/2016, https://works.bepress.com/antoinette_rouvroy/47/download/

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave Macmillan.

Berry, D. M. (2012) Understanding Digital Humanities, Basingstoke: Palgrave.

Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury

Berry, D. M. and Fagerjord, A. (2017) Digital Humanities: Knowledge and Critique in a Digital Age, Cambridge: Polity.

Golumbia, D. (2009) The Cultural Logic of Computation, Harvard University Press.

Golumbia, D. (2017) Mirowski as Critic of the Digital, boundary 2 symposium, “Neoliberalism, Its Ontology and Genealogy: The Work and Context of Philip Mirowski”, University of Pittsburgh, March 16-17, 2017

Stiegler, B. (2016) The Automatic Society: The Future of Work, Cambridge: Polity.


The Uses of Open Access

It is increasingly clear that the university is undergoing rapid change in higher education systems right across the globe. This is partly due to the forces of digital technology, partly due to neoliberal restructuring of the higher education sector by governments, and partly due to a shift in student demographics, expectations and a new consumerist orientation. However, there is an additional pressure on universities, and an illogical one at that, which is the claim that they do not contribute to the public good through their practices of publication. This claim has more recently come from open access (OA) advocates, but also increasingly from governments that seek to use university research as a stimulus to economic growth. The claim is without foundation and unhistorical, but it is being made with greater stridency and is being taken up by research funders and university managements as an accurate state of affairs that they seek to remedy through new policies and practices related to academic publication. It is time, as Allington has convincingly argued, that we ask “what’s [OA] for? What did [OA’s] advocates… think it was going to facilitate? And now that it’s become mainstream, does it look as if it’s going to facilitate that thing we had in mind, or something else entirely?” (Allington 2013).

In this article I want to start to explore some of the major themes that I think need to be addressed in the current push towards open access, but also how open access serves as a useful exemplar of the range of “innovations” being forced on the university sector. With such a large subject I can only gesture to some of the key issues here, but my aim is to start to unpick some of the more concerning claims of open access advocates and to question why their interests, government proposals and university management priorities are too often oriented in the same direction. I want to suggest that this is not accidental, and actually reflects an underlying desire to “disrupt” the academy which will have dire implications for academic labour, thought and freedom if it is not contested. Whilst it is clear that some open access advocates believe that their work will contribute to and further the public good, without an urgent critique of the rapidity and acceleration of these practices, the university, as it has been historically constituted through the independent work of scholars, will be undermined, and the modern university as we have come to understand it may be transformed into a very different kind of institution.[1]

Within this new complex landscape of the university, there has been a remarkable take-up and acceleration of the notion of mandated Open Access (OA). Open Access is the use of copyright licences to make textual materials available to others for use and reuse, through a mechanism similar to that created by the Free Software Foundation with the GNU General Public License (GPL) and later developed through the activities of the open access movement and the Creative Commons organisation. The FLOSS (Free Libre and Open Source Software) movement and the Creative Commons have been important in generating new ways of thinking about copyright, but also in generating spaces for the construction of new technologies and cultural remixes, particularly through the GNU GPL and the Creative Commons Share-Alike licence (Berry 2008). Nonetheless, these new forms of production around copyright licences have not been free of politics, and often carry with them cyberlibertarian notions about how knowledge should be treated, how society should be structured, and the status of the individual in a digital age (see Berry 2008). These links between the ways of thinking shared by open source and open access raise particular concerns. As Golumbia has cautioned, “in general, it is the fervor for OA… especially as expressed in the idea that OA should be mandated at either the institutional or governmental level… [that] seems far more informed by destructive intent and ideology toward certain existing institutions and practices than its most strident advocates appear to recognize, even as they openly recommend that destruction” (Golumbia 2016: 76).

However, it is important to note at this point that I agree with Golumbia that,

this does not mean that OA is uniformly a bad idea: it is not. In many ways it is, very clearly, a good idea. In particular, versions of voluntary “green” OA, where researchers may or may not deposit copies of their works wherever and under whatever conditions they choose, and the voluntary creation of OA journals when not accompanied by pressure, institutional or social, to refrain from publishing in non-OA journals, strike me as welcome… But it is a good idea that has been taken far beyond the weight that the arguments for it can bear, and frequently fails to take into account matters that must be of fundamental concern to any left politics. Further, it is a good idea that is surrounded by a host of ideas that are nowhere near as good, and that fit too easily into the general rightist attack on higher education, especially in the humanities, that operates worldwide today (Golumbia 2016: 76).

To examine these issues, first I want to briefly explore the new political economic reality that has been facing the university in the late 20th and early 21st century. Indeed, we have seen these changes mapped out in a number of important recent publications about the UK and USA university systems (see for example, Collini 2012, 2017; Holmwood 2011; Readings 1996). Under this new regime, it is argued that the student is cast as consumer, and the academic is recast as an academic entrepreneur who must constantly seek to make “impact” through activities that lead to an outcome that can be quantified (Biswas and Kirchherr 2016). Finlayson and Hayward (2012) have argued that in changing the university, “four different rationales have been put forward by successive administrations or their appointed advisors for these reforms: 1. Expansion, 2. Efficiency, 3. Economic accountability – i.e. value for money, 4. Political accountability – i.e. democratisation or widening participation”.  These are demonstrated most clearly in the notion of “impact”. Stefan Collini, for example, describes how in the REF (Research Excellence Framework) consultation document 37 different “impact indicators” are outlined for assessing the university sector, most of which serve to promote economic or utilitarian interests,

nearly all of these refer to “creating new businesses”, “commercialising new products or processes”, attracting “R&D investment from global business”, informing “public policy-making” or improving “public services”, improving “patient care or health outcomes”, and improving “social welfare, social cohesion or national security” (a particularly bizarre grouping). Only five of the bullet points are grouped under the heading “Cultural enrichment”. These include such things as “increased levels of public engagement with science and research (for example, as measured by surveys)” and “changes to public attitudes to science (for example, as measured by surveys)”. The final bullet point is headed “Other quality of life benefits”: in this case, uniquely, no examples are provided. The one line under this heading simply says “Please suggest what might also be included in this list” (quoted in Finlayson and Hayward 2012).

Indeed, more recently Collini (2017) has described the events leading up to the emergence of what has come to be called the “impact agenda”. This is the idea that research should be shown to be socially beneficial and economically useful. Collini describes how Gordon Brown, then at the Treasury, was being lobbied by businesses who sought to change the incentives of the universities towards short-term, preferably commercial, impact-led innovation. This led to “impact” being added to the research assessment process of the REF, which, many have argued, deliberately shifts how the university understands itself as an institution.

Similarly, it is clear from the “2003 White Paper and the 2007 Annual Review of the Science and Innovation Investment Framework that, in spite of one or two passing remarks about the value of education, the Government’s overriding concern is to harness and increase the economic impact of research… All the government reviews, papers and reports in the period are about how to make Higher Education serve the needs of the knowledge economy” (Finlayson and Hayward 2012). These kinds of claims and arguments are often related to the notion of the emergence of an information society, usually understood as a shift in Western economies from the production of goods to the production of innovation (see Berry 2008: 4). This is related to a similar notion of a knowledge-based economy, which is built on the condition that there is knowledge, information and data flowing freely around that economy, structured in such a way as to allow exchange, aggregation, reuse and transformation, preferably with minimal friction. Geert Lovink captures this well when he says that Google’s mantra is “let others do the work first that we won’t pay for. You write the book, we scan it and put our ads next to it” (Lovink 2016: 169). As Greenspan argued in 1996,

the world of 1948 was vastly different from the world of 1996. The American economy, more then than now, was viewed as the ultimate in technology and productivity in virtually all fields of economic endeavor [sic]. The quintessential model of industrial might in those days was the array of vast, smoke-encased integrated steel mills in the Pittsburgh district and on the shores of Lake Michigan. Output was things, big physical things. Virtually unimaginable a half-century ago was the extent to which concepts and ideas would substitute for physical resources and human brawn in the production of goods and services (Alan Greenspan, quoted in Perelman 2003).

Clive Humby has described a kind of process whereby “data is the new oil… Data is just like crude. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value” (Palmer 2006). Or as Wired put it, “like oil, for those who see data’s fundamental value and learn to extract and use it there will be huge rewards. We’re in a digital economy where data is more valuable than ever. It’s the key to the smooth functionality of everything from the government to local companies. Without it, progress would halt” (Toonders 2014). So this extractive metaphor, which is rich in illustrative description but limited for describing the process of creating, maintaining and using research, has nonetheless served to inspire governmental policy in numerous ways. For example, Meglena Kuneva, European Consumer Commissioner at the European Commission, has described personal data as “the new oil of the internet and the new currency of the digital world” (Kuneva 2009). Indeed, Hinssen (2012) uses the notion that “information is the new oil” and that we should be “drilling new sources of innovation”. Innovation in this sense usually means changing or creating more effective processes, products and ideas for commercial exploitation. Naturally, the next step has been to connect the notion of data (or “open data” as it has been termed) to this extractive metaphor. Indeed, the Office for National Statistics (a producer of data sets) has argued that “if data is the new oil, Open Data is the oil that fuels society and we need all hands at the pump” (Davidson 2016). What makes data into open data is that it is free of the intellectual property restrictions, such as copyright, that would otherwise prevent it from being used by others, and that it is machine readable. Open data, like open access publications and open source before them, relies on copyright licences that grant the user the right to dice up and remix the textual or other digital materials in ways that can create new forms of innovative products. Under these conditions, open access works can be collected into a computer-processable corpus to be subjected to pattern-matching algorithms and Big Data analysis, used as free content to populate Silicon Valley apps and services, and otherwise processed to turn the “oil” into economic products.

In this sense, the knowledge economy is built on a contradictory set of principles: property rights to control intellectual products and processes (including digital rights management), and a mechanism to promote the “free” or “open” circulation of data and information. This contradiction is resolved if one understands them not as mutually antagonistic, but rather as differing spheres or layers of the knowledge economy, with free data and information at the bottom, waiting to be exploited by entrepreneurs, and a thriving ecosystem of corporations living on top of this land. Indeed, within the academic literature and in governmental publications there is a tacit notion that government, government-funded research and historical cultural materials (usually out of copyright but not digitised and sitting in archives) should become freely available knowledge in a form that can be “mined” by the private sector in order to create economic growth. But to fully realise this vision requires that much more of the information and knowledge generated by, for example, universities and archives be opened up for innovation. This opening up quite literally means providing their materials in a digital form without the kinds of copyright protections that have historically provided the stimulus for research publication, to be handed over to the private sector gratis. Indeed, these private sector corporations are driven by very different norms to the research university and certainly do not share its ethical commitment towards science and knowledge. Rather, “the norms that guide how companies like… Google organise and disseminate knowledge are primarily market based and have little in common with the formative practices and intellectual virtues that constitute the core of the research university” (Wellmon 2015: 272).

One of the most influential descriptions of the workings of innovation has been the notion of “disruptive innovation”, a term developed by Christensen (1997) which describes a process by “which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves up market, eventually displacing established competitors” (Christensen 2017). This notion of disruptive innovation is now very much part of Silicon Valley ideology, and has become part of a discourse that has led to calls for “disruption” in other sectors of the economy, from taxis and hotels to deliveries and education. Disruption theory has been connected with new ways of doing things that disturb, interrupt or cause to be restructured what is perceived to be a “closed” way of doing things, whether that closure comes through unionisation, monopoly or oligopoly behaviour, or public-sector or educational provision. Indeed, in regard to the possible disruption of the university sector, the Economist was keen to argue that technology “innovation is eliminating those constraints [of existing universities]… and bringing sweeping change to higher education” (Economist 2014). What is striking is how this notion of disruptive innovation is increasingly being mobilised in relation to the university by government, industry and also university management itself, but also how, often uncritically, many open access advocates echo the call for a disruptive innovation of university publication practices (see Mourik Broekman et al 2015).

This is an example of how disruptive innovation in relation to the university sector has created the conditions under which university research outputs have been re-articulated as “not open”. There has subsequently been an attempt to argue that they must be “opened up”, and that as such they would become a resource which can be made available to others to contribute to innovation and economic growth. One of the most important examples is the Finch report of 2012, commissioned by the UK Government. This report drew on many of these themes and made an explicit link between economic growth and access to and use of publicly funded research. It argued that “most people outside the HE sector and large research-intensive companies – in public services, in the voluntary sector, in business and the professions, and members of the public at large – have yet to see the benefits that the online environment could bring in providing access to research and its results”. These innovations, it argues, are prevented because of “barriers to access – particularly when the research is publicly-funded – [which] are increasingly unacceptable in an online world: for such barriers restrict the innovation, growth and other benefits which can flow from research” (Finch 2012).

The “barriers to access” that the report counter-intuitively identifies are the practices of publishing research in the public sphere in a form which has been enormously successful in transforming our societies over the last 350 years. Indeed, it is as if universities had, by publishing materials over this period, been actively seeking to create a closed system, rather than, as was actually the case, contributing to Enlightenment notions of a Republic of Letters and open science. Indeed, as Bernard Stiegler has pointed out, “every student who enrols in the final year of school [in France] is expected to know that the Republic of Letters was conditioned by the publishing revolution from which sprang gazettes and then newspapers, and that the philosophy of the Enlightenment that inspired the French Revolution itself emerged from this Republic of Letters” (Stiegler 2016: 235). Golumbia (2016: 77) has similarly observed that there is a real problem with open access advocates’ arguments that “what we have until the last decade or two called ‘publication’ somehow restricts access to information, rather than making [that] information more available”. These claims are not made more believable by the OA habit of picking on one or two major journal publishers who have especially problematic publication pricing strategies. This partial representation of the wider landscape of publishing, and the use of selective, and often very emotive, cases to argue that all academic publication is against the public good, is damaging to academia as a whole as well as unsubstantiated. This proselytising of the virtues of open access without any concern for its potential dangers is very reminiscent of the intense argumentation that has taken place within the FLOSS movement, where similar zealotry has been observed (see Berry 2008).

Open access advocates often claim an alignment between open access and democratisation, participation and the public good, but to me this is only part of the story about why open access is now being promoted by government. Indeed, if one were in any doubt about why open access might be useful to government, Finch has helpfully laid this out,

support for open access publication should be accompanied by policies to minimise restrictions on the rights of use and re-use, especially for non-commercial purposes, and on the ability to use the latest tools and services to organise and manipulate text and other content

[government should seek to] extend the range of open access and hybrid journals, with minimal if any restrictions on rights of use and re-use for non-commercial purposes; and ensure that the metadata relating [to them] makes clear articles are accessible on open access terms.

It goes without saying that these moves seek to ensure that “innovative” products can be refined from research outputs that have no restrictions on their extraction, use, and exploitation. Finch also uncritically argues that universities should fund, in combination with research councils and government, research that could later be used free of restrictions by commercial users, without those users contributing back into this open access repository. In effect, Finch is arguing for greater public subsidy of the private sector’s use of university research outputs. Indeed, the range of information from universities that Finch saw as available for exploitation includes “research publications…reports, working papers and other grey literature, as well as theses and dissertations… publications and associated research data” (Finch 2012).[2] Not only is Finch generalising the case for open access to all forms of output from universities and related research institutions, she is also eager to assume that students’ MA dissertations and PhD theses are fair game for commercial exploitation, without consideration of the ethical or legal implications of mining student work without their permission or consent. As Stiegler has argued, “the logic of the free and open (free software and hardware, open source, open science, open data, and so on), while initially conceived in order to struggle against the privatisation of knowledge and the plundering of those who possessed it, was able to be turned against the latter” and into a new form of proletarianisation (Stiegler 2016: 240).[3] Google and other companies have “touted their services as ‘free’ and available to all, but these companies are under pressure to return a profit to their investors” (Wellmon 2015: 272).

Open Access is too often presented as an unquestioned good, especially by its more zealous advocates (for a useful critique of this, see Golumbia 2016).[4] Following the push for mandating journal articles as open access across the UK higher education sector, for example, there is now a developing discourse of open access for monographs which tends to uncritically accept OA’s “progressive” benefits (see Crossick 2015). Indeed, this has now been confirmed as part of the UK REF for 2027, and it will be a major change to the way academics publish long-form academic work, affecting their control over their academic writings in book form and representing a significant shift in academic practice. Although few authors make much money from their monographs, books nonetheless represent an independent income stream disconnected from their employers and have helped to support and reinforce academic freedom. It should be noted that this is a proposal very much encouraged by university management, and it has not been subject to sufficient critical attention by the academic community, who are often distracted by the claims to “democracy” or “public culture” to which open access is linked. Indeed, as Fuller has argued, “public access to academic publications in their normal form is merely a pseudo-benefit, given that most people would not know what to make of them” (Fuller 2016).

In this short article I have sought to contribute to work that problematises open access ideas and places them within their specific historical location. By drawing links between government policies that have sought to reorient the university from its historical mission of research and understanding towards economic growth and impact, one begins to see a new alignment of power and knowledge. Open access appears at a time when digital technologies are changing the contours of the dissemination of knowledge and information and are also challenging the publishing industry with new means of publication. Therefore “granting companies… the authority to distribute, even as platforms and not necessarily owners, university-produced knowledge could cede control over the dissemination and organisation of knowledge to institutions primarily oriented to profit-making” (Wellmon 2015: 272). Indeed, OA cannot be understood without seeing it within this wider historical constellation, and consequently its advocates’ attempts to depoliticise it by placing it within a moral category, that is, as an obvious good, are extremely concerning and need urgent critique. Additionally, as Fuller argues, “much of the moral suasion of the open access movement would be dissipated if it complained not only about the price of academic journals but also the elite character of the peer-review process itself… in effect open access is making research cheaper to those who already possess the skills to [use it]…” (Fuller 2016). Open access raises important questions about how publications can better reach publics and audiences, but by exaggerating its advantages and dismissing its disadvantages, it becomes ideological and therefore unreflexive about its uses in the current restructuring of the university and knowledge in the 21st century.

Notes

[1] Rockhill (2017) has written about how these changes in the university diminish the range of critical voices that historically were found in the academy. Indeed, he suggests that they “should invite us to think critically about the current academic situation in the Anglophone world and beyond, [for example]… the ways in which the precarization of academic labor contributes to the demolition of radical leftism. If strong leftists cannot secure the material means necessary to carry out our work, or if we are more or less subtly forced to conform in order to find employment, publish our writings or have an audience, then the structural conditions for a resolute leftist community are weakened”.  Similarly, Golumbia has argued that “depriving professors of the opportunity to earn money for their own creative and scholarly productions is one of the best ways to eviscerate what is left of the professiorate” (Golumbia 2013). 
[2] Finch argued further and completely bizarrely that “we therefore expect market competition to intensify, and that universities and funders should be able to use their power as purchasers to bear down on the costs to them both of APCs and of subscriptions” (Finch 2012). The idea that a smaller number of academic purchasers would drive down prices by paying for production rather than consumption of research publications was presented with no evidence except for the self-evidence of the claim. 
[3] Stiegler also quotes the following: “Catherine Fisk, a lawyer, has gone through old trials in the US in which employers and employees confronted each other over the ownership of ideas. In the early 19th century, courts tended to uphold the customary right of workers to freely make use of knowledge gained at the workplace, and attempts by employers to claim the mental faculties of trained white workers were rejected by courts because this resembled slavery too closely. As the knowhow of workers became codified and the balance of power shifted, courts began to vindicate the property claims of employers” (Stiegler 2016: 240).
[4] Andrew Orlowski has argued a similar point in relation to free culture advocates in cultural production, “unfortunately for the creative industries, there’s money and prestige to be gained from promoting this baffling child-like view [that the creative economy exists to deprive people of publicly owned goods]. The funds that cascade down from Soros’ Open Society Initiative into campaigns like A2K, or from the EU into NGOs like Consumer International, or even from UK taxpayers into quangos like Consumer Focus, all perpetuate the myth that there’s a ‘balance’: that we’ll be richer if creators are poorer, we’ll have a more-free society if we have fewer individual rights, and that in the long-term, destroying rewards for creators is both desirable and ‘sustainable’” (Orlowski 2012).

Bibliography

Allington, D. (2013) On open access, and why it’s not the answer, http://www.danielallington.net/2013/10/open-access-why-not-answer/

Berry, D. M. (2008) Copy, Rip, Burn: The Politics of Copyleft and Open Source, London: Pluto Press.

Biswas, A. and Kirchherr, J. (2016) The Tough Life of an Academic Entrepreneur: Innovative commercial and non-commercial ventures must be encouraged, LSE Blog, http://blogs.lse.ac.uk/impactofsocialsciences/2016/02/16/the-tough-life-of-an-academic-entrepreneur/

Mourik Broekman, P., Hall, G., Byfield, T., Hides, S. and Worthington, S. (2015) Open Education: A Study in Disruption, London: Rowman and Littlefield.

Collini, S. (2012) What Are Universities For?, London: Penguin.

Collini, S. (2017) Speaking of Universities, London: Verso.

Christensen, C. M. (1997) The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail, Boston, MA: Harvard Business School Press.

Christensen, C. M. (2017) Disruptive Innovation, http://www.claytonchristensen.com/key-concepts/
Crossick, G. (2015) Monographs and Open Access: A report to HEFCE, HEFCE, http://www.hefce.ac.uk/media/hefce/content/pubs/indirreports/2015/Monographs,and,open,access/2014_monographs.pdf

Davidson, R. (2016) Open Data is the new oil that fuels society, Office for National Statistics, https://blog.ons.digital/2016/01/25/open-data-new-oil-fuels-society/

Economist (2014) Massive open online forces, The Economist, https://www.economist.com/news/finance-and-economics/21595901-rise-online-instruction-will-upend-economics-higher-education-massive

Finch (2012) Accessibility, sustainability, excellence: how to expand access to research publications, Report of the Working Group on Expanding Access to Published Research Findings, https://www.acu.ac.uk/research-information-network/finch-report-executive-summary

Finlayson, G. and Hayward, D. (2012) Education towards heteronomy: a critical analysis of the reform of UK universities since 1978, libcom.org, https://libcom.org/history/education-towards-heteronomy-critical-analysis-reform-uk-universities-1978

Fuller, S. (2016) Academic Caesar, London: Palgrave Macmillan.

Golumbia, D. (2013) On Allington on Open Access, uncomputing, http://www.uncomputing.org/?p=288

Golumbia, D. (2016) Marxism and Open Access in the Humanities: Turning Academic Labor against Itself, Workplace, 28, 74-114.

Hinssen, P. (2012) (Ed.) Information is the New Oil: Drilling New Sources of Innovation, http://datascienceseries.com/assets/blog/GREENPLUM_Information_is_the_new_oil-LR.pdf

Holmwood, J. (2011) (Ed.) A Manifesto for the Public University, London: Bloomsbury Academic.

Kuneva, M. (2009) Keynote Speech, Roundtable on Online Data Collection, Targeting and Profiling, http://europa.eu/rapid/press-release_SPEECH-09-156_en.htm

Lovink, G. (2016) Social Media Abyss: Critical Internet Cultures and the Forces of Negation, Cambridge: Polity.

Orlowski, A. (2012) Popper, Soros, and Pseudo-Masochism, http://andreworlowski.com/2012/05/02/popper-soros-and-pseudo-masochism/

Palmer, M. (2006) Data is the New Oil, http://ana.blogs.com/maestros/2006/11/data_is_the_new.html

Perelman, M. (2003) ‘The Political Economy of Intellectual Property’, Monthly Review 54 (8): 29–37.

Readings, B. (1996) The University in Ruins, London: Harvard University Press.

Rockhill, G. (2017) The CIA Reads French Theory: On the Intellectual Labor of Dismantling the Cultural Left, The Philosophical Salon, http://thephilosophicalsalon.com/the-cia-reads-french-theory-on-the-intellectual-labor-of-dismantling-the-cultural-left/

Stiegler, B. (2016) The Automatic Society: The Future of Work, Cambridge: Polity.

Toonders, Y. (2014) Data is the New Oil of the Digital Economy, Wired, https://www.wired.com/insights/2014/07/data-new-oil-digital-economy/

Wellmon, C. (2015) Organizing Enlightenment: Information Overload and the Invention of the Modern Research University, Baltimore: Johns Hopkins University Press.

Towards an Idea of Universitality

Ruins of Plato’s Academy

What would it mean to reclaim the university from its ruins? To revisit what were considered the fundamental conditions of the social epistemology of the university without falling into the trap of nostalgia and traditionalism? In this short post I want to think about what might be the content of a notion I am calling universitality, understood precisely as the conceptualisation of a constellation of thought and practice manifested through multiple histories, practices, institutions and bodies related to the idea of a university (Readings 1996; Rothblatt 1972; Thelin 2011; Whyte 2016). By means of a set of hodos, I intend to examine the notion of a university with a view to developing a new conceptualisation in response to the contemporary crisis of the university, but also in terms of the crisis of epistemology in and through the university in its modern corporate form.

This is to rethink the university in light of the more recent challenge to universities and collegiality, and to turn a critical eye on the return of a philosophy of utility which hangs over the fate of universities in the 21st century and which dates back to before the founding of the University of London (Collini 2012; Holmwood 2011). This is not to say, of course, that this is a new threat to the university (Newman 1996; Shils 1972). Indeed, the history of the university has also been a history of thought against power, of reason against utility, until in the 20th and 21st centuries thought and reason themselves become instrumentalised in the service of a project of economism driven in part by computationalism and neoliberalism. But what I explicitly seek to do in this article, in contrast to Collini (2017: 24), is to “propose some ideal or essence, some way of distinguishing supposedly ‘real’ universities from institutions that do not deserve the name”. In other words, by making a cut, which Collini (2017) is reluctant to do, one develops the means of describing and classifying what we might call the university-ness of a university. This is, by its nature, an exercise in genealogy as much as description, but it is also about recovering an idea of the university that seems to be all but forgotten, and without which we struggle to articulate a sense of what a university is for.

I draw the notion of universitality from the Latin universitas, in the particular sense of Studium Generale (understood as a place where students came to study) and more particularly as Magistrorum et Discipulorum (i.e. of masters and scholars, where scholars here means students). A universitas is a form of organisation that can own and control a group’s property in common for its members, and which has a set of rules and regulations to which the masters and students must conform in order to be accepted into the guild. Indeed, this is the etymological source of the notion of “university”, which originally denoted a corporate body of masters and students acting as a legal person. The universitas typically exists where a resource is too large for a single member to administer, or to provide temporal security beyond individuals’ lifetimes. These corporate bodies, subsequently recognised by royal decree, were similar to municipalities or guilds, which would often own property such as racetracks and theatres. What is important to note here is that the term university is not drawn from universal or general knowledge, but rather from the generality of the people who can study within the universitas. The idea was that a universitas could be joined by anyone capable of profiting from being there, that is, without distinction of class, age, rank or previous occupation. So the universitas was understood as a specific form of corporation or society, hence the notion of members of the society being identified as socii (e.g. Fellows, a term still used at Oxford and Cambridge, and elsewhere for visiting academics). In this understanding of the university only the Fellows are essential to the university, and they are tasked with the search after knowledge, to advance knowledge and to possess knowledge for themselves. Indeed, historically, the role of the university has been closely associated with the production of knowledge, right up to present times. But the universitas also created the conditions for particular epistemologies and particular ways of seeing.

Here are clues to the first aspect of universitality: the notion that those who make up its core are a community of associates, the masters, dedicated to the advancement of knowledge, understanding and learning. Built around this core group are the structures of the buildings, the libraries, and the scholars or students who are instructed, trained and educated, but also tested, licensed and qualified for competence by the masters. This is also the basis for the assertion by Kant that the university is ruled by an idea of reason emerging from philosophy, in other words, with infinity (see Derrida 2004: 83-112). Kant outlined this argument about the nature of the university in 1798, in The Conflict of the Faculties. He argued that all of the university’s activities should be organised through a single regulatory idea – the concept of reason. Kant argued that reason and the state, knowledge and power, could be unified in the university by the production of individuals capable of rational thought and republican politics – the students trained for the civil service and society. This is the beginning of the modern notion of a university, and with it the development of both objective and subjective attempts to shape knowledge and learning towards the needs of modernity and its complex society. With this we see the development of the second aspect of the concept of universitality: the idea that the specific social epistemology of a scholarly community is regulated by the notion of reason.

Bibliography

Collini, S. (2012) What Are Universities For?, London: Penguin.

Collini, S. (2017) Speaking of Universities, London: Verso.

Derrida, J. (2004) Mochlos; or, The Conflict of the Faculties, in The Eyes of the University, Stanford University Press.

Holmwood, J. (2011) (Ed.) A Manifesto for the Public University, London: Bloomsbury Academic.

Kant, I. (1991) The Conflict of the Faculties, in Kant, I., Kant: Political Writings, Cambridge University Press.

Newman, J. H. (1996) The Idea of a University, Yale University Press.

Readings, B. (1996) The University in Ruins, London: Harvard University Press.

Rothblatt, S. (1972) The Modern University and its Discontents: The Fate of Newman’s Legacies in Britain and America, Cambridge: Cambridge University Press.

Shils, E. (1972) Intellectuals and the Powers and Other Essays, The University of Chicago Press.

Thelin, J. R. (2011) A History of American Higher Education, Baltimore: Johns Hopkins University Press.

Whyte, W. (2016) Redbrick: A Social and Architectural History of Britain’s Civic Universities, Oxford: Oxford University Press.

The Digital Humanities Stack

Thinking about the structure of the digital humanities, it is always helpful if we can visualise it to provide some sort of map or overview. Here, I am exploring a way of representing the digital humanities through the common computer science technique of a software “stack”. This is the idea that a set of software components provides the infrastructure for a given computer system or platform. In a similar way, here I illustrate the discipline of digital humanities with a pictorial representation of the layers of abstraction in the image given below. This gives the reader an idea of what I am calling the digital humanities stack.

The Digital Humanities Stack, illustration by Marcus Leis Allion  (Berry 2016)

This type of diagram is common in computing and computer science to show how technologies are “stacked” on top of each other in growing levels of abstraction. Here, I use the method in a more illustrative and creative sense to show the range of activities, practices, skills, technologies, and structures that could be said to make up the digital humanities as an ideal type. This is clearly a simplification, and is not meant to be prescriptive; rather, it is intended to be helpful to the newcomer to the digital humanities, as it helps to show how the varied elements that make up the field fit together. Whilst I can foresee criticisms about the make-up and ordering of the stack that I present here, I nonetheless think it provides, more or less, a useful visual guide to how we can think about the various components of the digital humanities and contributes towards further understanding of the field. I deliberately decided to leave out the “content” elements in their specificity, for example the different kinds of digital archive that we see across the digital humanities. I think that this is acceptable as the term digital archive does, I think, capture a wide range of digital databases and archival forms, although it perhaps does not strongly enough signify the related material elements, for example in a “postdigital archive” that includes both digital and non-digital elements. Relatedly, this diagram does not capture sufficiently, perhaps, something like the inclusion of a media archaeological collection in its materiality.

So this diagram can be read from the bottom up, with the lower levels indicating some of the fundamental elements of the digital humanities stack, such as computational thinking and knowledge representation, and the other elements building on these. Of course, diagrams simplify, and even though I would have preferred the critical and cultural critique to run through more of the layers, in the end it made for a more easily digestible visual representation if I didn’t over-complicate the diagram. The illustration here stretches the concept of a stack, in the strict computer science sense, as it includes institutional layers and non-computational elements, but as a heuristic for thinking about the digital humanities in its specificity, I think it can be helpful. As a version 1.0 of the digital humanities stack, I look forward to reworkings, complications and re-articulations of it in the comments.
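
To make the bottom-up reading concrete, here is a minimal sketch, in Python, of how such a stack might be represented in the computer science sense of growing levels of abstraction. The layer names are drawn only from those mentioned in this post, so this is an illustrative partial ordering rather than a definitive reading of the full diagram.

```python
# A minimal, illustrative representation of a layered "stack": each entry
# is assumed to build on the layers below it. Only layer names mentioned
# in the post are used here; the full diagram contains many more.
dh_stack = [
    "computational thinking",          # fundamental layer
    "knowledge representation",        # fundamental layer
    "digital archives",                # content and collections built on the above
    "institutional structures",        # non-computational, institutional layer
    "critical and cultural critique",  # ideally running through all the layers
]


def describe(stack):
    """Print the stack from the bottom (most fundamental) layer upwards."""
    for level, layer in enumerate(stack):
        print(f"level {level}: {layer}")


if __name__ == "__main__":
    describe(dh_stack)
```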

New Book: Digital Humanities

New book, Digital Humanities, authored by David M. Berry and Anders Fagerjord, is in production with Polity and will be available in April 2017.

Signal Lab

As part of the Sussex Humanities Lab, at the University of Sussex, we are developing a research group clustered around information theoretic themes of signal/noise, signal transmission, sound theorisation, musicisation, simulation/emulation, materiality, game studies theoretic work, behavioural ideologies and interface criticism. The cluster is grouped under the label Signal Lab and we aim to explore the specific manifestations of the mode of existence of technical objects. This is explicitly a critical and political economic confrontation with computation and computational rationalities.

Signal Lab will focus on techno-epistemological questions around the assembly and re-assembly of past media objects, postdigital media and computational sites. This involves both attending to the impressions of the physical hardware (as a form of techne) and the logical and mathematical intelligence resulting from software (as a form of logos). Hence we aim to undertake an exploration of the technological conditions of the sayable and thinkable in culture, and of how the inversion of reason as rationality calls for the excavation of how techniques, technologies and computational media direct human and non-human utterances, without reducing techniques to mere apparatuses.

This involves the tracing of the contingent emergence of ideas and knowledge in systems in space and time, to understand distinctions between noise and speech, signal and absence, message and meaning. This includes an examination of the use of technical media to create the exclusion of noise as both a technical and political function and the relative importance of chaos and irregularity within the mathematization of chaos itself. It is also a questioning of the removal of the central position of human subjectivity and the development of a new machine-subject in information and data rich societies of control and their attendant political economies.

Within the context of information-theoretic questions, we revisit the old chaos and the return of the fear of, if not an aesthetic captivation toward, a purported contemporary gaping meaninglessness, often associated with a style of nihilism, a lived cynicism and a jaded glamour of emptiness or misanthropy. This is particularly so in relation to a political aesthetic that desires the liquidation of the subject, which, in the terms of our theoretical approach, creates not only a regression of consciousness but also a regression to real barbarism. That is, data, signal, mathematical noise, information and computationalism conjure the return of fate and the complicity of myth with nature, a concomitant total immaturity of society, and a return to a society in which self-reflection can no longer open its eyes, and in which the subject not only does not exist but instead becomes understood as a cloud of data points, a dividual, an undifferentiated data stream.

Signal Lab will therefore pay attention to both the synchronic and diachronic dimensions of computational totality, taking the concrete meaningful whole and the essential elements of computational life and culture. This involves explaining the emergence of presently given social forces in terms of past structures and general tendencies of social change. That is, within a given totality there is a process of growing conflict among opposing tendencies and forces, which constitutes the internal dynamism of the system and can be examined partly at the level of behaviour and partly at the level of subjective motivation. This is to examine the critical potentiality of signal in relation to the possibility of social forces and their practices and articulations within a given situation, and how they can play their part in contemporary history. This potentially opens the door to new social imaginaries and political possibilities for an emancipatory politics in a digital age.

Signal

One of the key moments in the composition of the conditions of possibility for a digital abstraction, within which certain logical operations might be combined, performed and arranged to carry out algorithmic computation, took place in 1961 when James Buie, who was employed by Pacific Semiconductor, patented transistor–transistor logic (TTL). This was an all-transistor logic built from analogue circuitry that, crucially, standardised the voltage configuration for digital circuitry (0V–5V). It represented a development from the earlier diode–transistor logic (DTL), which used a diode network with an amplifying function performed by a transistor, and the even earlier resistor–transistor logic (RTL), which used resistors for the input network and bipolar junction transistors (BJTs) as the switching devices. The key to these logic circuits was the creation of a representation of logic functions through the arrangement of the circuitry such that key Boolean operations could be performed. TTL offered an immediate speed increase, as switching through a diode input is slower than through a transistor. With the creation of TTL circuitry, the logical operations of NAND and NOR allowed the modular construction of a number of Boolean operations that themselves served as the components of microprocessor modules, such as the adder.
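To make this modular construction concrete, the following is a minimal illustrative sketch in Python (not, of course, the 1961 circuitry itself, and the function names are mine): treating NAND as a primitive, the other Boolean operations, and a single-bit half adder, can be composed from it alone.

```python
# Illustrative sketch: NAND as a universal gate from which other Boolean
# operations, and a simple adder stage, can be modularly composed.

def NAND(a: int, b: int) -> int:
    """Logical NAND over binary inputs (0 or 1)."""
    return 0 if (a and b) else 1

def NOT(a: int) -> int:
    return NAND(a, a)

def AND(a: int, b: int) -> int:
    return NOT(NAND(a, b))

def OR(a: int, b: int) -> int:
    return NAND(NOT(a), NOT(b))

def XOR(a: int, b: int) -> int:
    return AND(OR(a, b), NAND(a, b))

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for single-bit addition, built only from NAND."""
    return XOR(a, b), AND(a, b)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", half_adder(a, b))
```

The point of the sketch is simply that once one gate is standardised and interoperable, everything above it becomes a matter of arrangement rather than electronics.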

I want to explore the importance of signal in relation to the interface between the underlying analogue carrier of the digital circuitry and the logical abstraction of digital computation – that is, the maximisation of signal over noise in the creation of a digital signal carrier. It is exactly at this point that the emergence of digital computation is made possible, but there is also a suggestive link between signal and noise that points to the use of abstraction to minimise noise throughout the design of the digital computer, creating a logical universe within which computational thinking – that is, signal without noise, or at least without noise as previously understood as thermal noise – is a constituent of programming practice. This is useful for developing an understanding of notions of materiality in theorising the digital, but also for making explicit the connection between the digital “signal” and the voltage “signal”, and thus the possibility of the communication of information in a digital system.

At its most basic level, a standard TTL circuit requires a 5-volt power supply, which provides the framework within which a binary dichotomy is constructed to represent true (1) and false (0). The TTL signal is considered “low”, that is “false” or “0”, when the voltage lies between 0V and 0.8V (with respect to ground), and “high”, that is “true” or “1”, when the voltage lies between 2.0V and 5V (the latter called VCC to indicate that the top voltage is provided by the power supply, known as the positive supply voltage). A voltage which lies between 0.8V and 2.0V is considered “uncertain” or “illegitimate” and may resolve to either side of the binary division depending on the prior state of the circuitry, or be filtered out by the use of additional circuitry. These ranges of voltage allow for manufacturing tolerances and instabilities of the material carrier, such that noise, uncertainty and glitches can be tolerated. This tripartite division creates the following diagram:

Tripartite division of voltage in TTL digital circuitry
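To illustrate the tripartite division, here is a minimal sketch in Python (the function name is mine, purely illustrative) that classifies an input voltage against the thresholds described above; the “uncertain” band is exactly the zone that additional circuitry must resolve or filter out.

```python
# A minimal sketch of the tripartite division of TTL voltage:
# low <= 0.8V, high >= 2.0V, otherwise indeterminate.

def ttl_logic_level(voltage: float, vcc: float = 5.0) -> str:
    """Map an analogue voltage onto the digital abstraction of TTL."""
    if voltage < 0 or voltage > vcc:
        return "out of range"        # outside the supply rails
    if voltage <= 0.8:
        return "low (0 / false)"
    if voltage >= 2.0:
        return "high (1 / true)"
    return "uncertain"               # may resolve either way, or be filtered out

if __name__ == "__main__":
    for v in (0.2, 0.8, 1.4, 2.0, 4.9):
        print(f"{v:.1f}V -> {ttl_logic_level(v)}")
```

The width of the bands is what tolerates the noise and instability of the material carrier: the cut is made not at a point but across a margin.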

This standardisation of the grammatisation of voltage creates the first, and a significant, “cut” of the analogue world, and one which was hugely important historically. By standardising the division of the binary elements of digital computation, the interoperability of off-the-shelf digital circuits becomes possible, and thus, instead of thinking in terms of electrical compatibility, voltage and so forth, the materiality of the binary circuit is abstracted away. This makes possible the design and construction of a number of key circuits which can be combined in innovative ways. It is crucial to recognise that from this point the actual voltage of the circuits vanishes into the background of computer design, as the key issue becomes the creation of combinations of logical circuits, and the issues of propagation, cross-talk and noise emerge at a different level. In effect, the signal/noise problematic is raised to a new and different level.

Flat Theory

The world is flat.[1] Or perhaps better, the world is increasingly “layers”. Certainly the augmediated imaginaries of the major technology companies are now structured around a post-retina notion of mediation made possible and informed by the digital transformations ushered in by mobile technologies that provide a sense of place, as well as a sense of management of complex real-time streams of information and data.

Two new competing computational interface paradigms are now deployed in the latest versions of Apple’s and Google’s operating systems, but more notably as regulatory structures to guide the design and strategy related to corporate policy. The first is “flat design”, which has been introduced by Apple through iOS 8 and OS X Yosemite as a refresh of the ageing operating systems’ human–computer interface guidelines, essentially stripping the operating system of historical baggage related to techniques of design that disguised the limitations of a previous generation of technology, both in terms of screen and of processor capacity. It is important to note, however, that Apple avoids talking about “flat design” as its design methodology, preferring to talk through its platforms’ specificity, that is, about iOS’s design or OS X’s design. The second is “material design”, which was introduced by Google into its Android L, now Lollipop, operating system and which also sought to bring some sense of coherence to a multiplicity of Android devices, interfaces, OEMs and design strategies. More generally, “flat design” is “the term given to the style of design in which elements lose any type of stylistic characters that make them appear as though they lift off the page” (Turner 2014). As Apple argues, one should “reconsider visual indicators of physicality and realism” and think of the user interface as “play[ing] a supporting role”, that is, techniques of mediation through the user interface should aim to provide a new kind of computational realism that presents “content” as ontologically prior to, or separate from, its container in the interface (Apple 2014). This is in contrast to “rich design”, which has been described as “adding design ornaments such as bevels, reflections, drop shadows, and gradients” (Turner 2014).

I want to explore these two main paradigms – and to a lesser extent the flat-design methodology represented in Windows 7/8 and the, since renamed, Metro interface (now Microsoft Modern UI) – through the notion of a comprehensive attempt by both Apple and Google to produce a rich and diverse umwelt, or ecology, linked through what Apple calls “aesthetic integrity” (Apple 2014). This is both a response to their growing landscape of devices, platforms, systems, apps and policies, and an attempt to provide some sense of operational strategy in relation to computational imaginaries. Essentially, both approaches share an axiomatic approach to conceptualising the building of a system of thought, in other words a primitivist predisposition which draws both on a neo-Euclidian model of geons (for Apple) and on a notion of intrinsic value or neo-materialist formulations of essential characteristics (for Google). That is, they encapsulate a version of what I am calling here flat theory. Both of these companies are trying to deal with the problematic of multiplicities in computation, and the requirement that multiple data streams, notifications and practices have to be combined and managed within the limited geography of the screen. In other words, both approaches attempt to create what we might call aggregate interfaces by combining techniques of layout, montage and collage onto computational surfaces (Berry 2014: 70).

The “flat turn” has not happened in a vacuum, however, and is the result of a new generation of computational hardware, smart silicon design and retina screen technologies. This was driven in large part by the mobile device revolution, which has transformed not only the taken-for-granted assumptions of historical computer interface design paradigms (e.g. WIMP) but also the subject position of the user, particularly as structured through the Xerox/Apple notion of single-click functional design of the interface. Indeed, one of the striking features of the new paradigm of flat design is that it is a design philosophy about multiplicity and multi-event. The flat turn is therefore about modulation, not about enclosure as such; indeed, it is a truly processual form that constantly shifts and changes, and in many ways acts as a signpost for the future interfaces of real-time algorithmic and adaptive surfaces and experiences. The structure of control for flat design interfaces follows that of the control society: it is “short-term and [with] rapid rates of turnover, but also continuous and without limit” (Deleuze 1992). To paraphrase Deleuze: humans are no longer in enclosures, certainly, but everywhere humans are in layers.

Apple uses a series of concepts to link its notion of flat design, which include aesthetic integrity, consistency, direct manipulation, feedback, metaphors, and user control (Apple 2014). The haptic experience of this new flat user interface, reinforcing the sensation of “touching glass”, has been described as the “first post-Retina (Display) UI (user interface)” (Cava 2013). This is the notion of layered transparency or, better, layers of glass upon which the interface elements are painted through a logical internal structure of Z-axis layers. This laminate structure enables meaning to be conveyed through the organisation of the Z-axis, both in terms of content and in order to place it within a process or the user interface system itself.

Google, similarly, has reorganised its computational imaginary around a flattened, layered paradigm of representation through the notion of material design. Matias Duarte, Google’s Vice President of Design and a Chilean computer interface designer, declared that this approach uses the notion that it “is a sufficiently advanced form of paper as to be indistinguishable from magic” (Bohn 2014). But this is magic which has constraints and affordances built into it: “if there were no constraints, it’s not design — it’s art”, Google claims (see Interactive Material Design) (Bohn 2014). Indeed, Google argues that the “material metaphor is the unifying theory of a rationalized space and a system of motion”, further arguing:

The fundamentals of light, surface, and movement are key to conveying how objects move, interact, and exist in space and in relation to each other. Realistic lighting shows seams, divides space, and indicates moving parts… Motion respects and reinforces the user as the prime mover… [and together] They create hierarchy, meaning, and focus (Google 2014). 

This notion of materiality is a weird materiality, inasmuch as Google “steadfastly refuse to name the new fictional material, a decision that simultaneously gives them more flexibility and adds a level of metaphysical mysticism to the substance. That’s also important because while this material follows some physical rules, it doesn’t create the “trap” of skeuomorphism. The material isn’t a one-to-one imitation of physical paper, but instead it’s ‘magical'” (Bohn 2014). Google emphasises this connection, arguing that “in material design, every pixel drawn by an application resides on a sheet of paper. Paper has a flat background color and can be sized to serve a variety of purposes. A typical layout is composed of multiple sheets of paper” (Google Layout, 2014). The stress on material affordances, paper for Google and glass for Apple, is crucial to understanding their respective stances in relation to flat design philosophy.[2]

Glass (Apple): Translucency, transparency, opaqueness, limpidity and pellucidity. 

Paper (Google): Opaque, cards, slides, surfaces, tangibility, texture, lighted, casting shadows. 

Paradigmatic Substances for Materiality

In contrast to the layers of glass that inform the logics of transparency, opaqueness and translucency of Apple’s flat design, Google uses the notion of remediated “paper” as a digital material, that is, this “material environment is a 3D space, which means all objects have x, y, and z dimensions. The z-axis is perpendicularly aligned to the plane of the display, with the positive z-axis extending towards the viewer. Every sheet of material occupies a single position along the z-axis and has a standard 1dp thickness” (Google 2014). One might think, then, of Apple as painting on layers of glass, and Google as placing thin paper objects (material) upon a paper background. However, a key difference lies in the use of light and shadow in Google’s notion, which enables the light source, located in a position similar to that of the user of the interface, to cast shadows of the material objects onto the objects and sheets of paper that lie beneath them (see Jitkoff 2014). Nonetheless, a laminate structure is key to the representational grammar that constitutes both of these platforms.
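As a way of making this shared laminate structure explicit, the following is a speculative sketch in Python, not Apple’s or Google’s actual APIs or terminology: interface elements are modelled as sheets with a position on the z-axis, compositing order follows from that position, and, in the paper reading, mostly opaque sheets cast shadows on those beneath them, while in the glass reading lower layers show through according to the translucency above.

```python
# An illustrative data model of the laminate structure shared by both
# paradigms: elements occupy discrete positions on a z-axis, and meaning is
# conveyed by their ordering (shadow for "paper", see-through for "glass").

from dataclasses import dataclass

@dataclass
class Sheet:
    name: str
    z: int               # position along the z-axis (towards the viewer)
    translucency: float  # 0.0 = opaque paper, 1.0 = clear glass

def render_order(sheets: list[Sheet]) -> list[str]:
    """Composite from the bottom of the stack upwards."""
    return [s.name for s in sorted(sheets, key=lambda s: s.z)]

def casts_shadow_on(upper: Sheet, lower: Sheet) -> bool:
    """In the 'paper' reading, a mostly opaque sheet shadows what lies beneath it."""
    return upper.z > lower.z and upper.translucency < 0.5

stack = [
    Sheet("background", z=0, translucency=0.0),
    Sheet("content card", z=1, translucency=0.0),
    Sheet("navigation bar", z=2, translucency=0.8),
]

print(render_order(stack))                   # bottom-to-top compositing order
print(casts_shadow_on(stack[1], stack[0]))   # True: the card shadows the background
```

The sketch is only meant to show how little is needed to express the laminate grammar: a z position and a material property are enough to generate hierarchy, focus and depth cues.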

Armin Hofmann, head of the graphic design department at the Schule für Gestaltung Basel (Basel School of Design), was instrumental in developing the graphic design style known as the Swiss Style. Designs from 1958 and 1959.

Interestingly, both design strategies emerge from an engagement with, and reconfiguration of, principles that draw from the Swiss style (sometimes called the International Typographic Style) in design (Ashghar 2014, Turner 2014).[3] This approach emerged in the 1940s, and

mainly focused on the use of grids, sans-serif typography, and clean hierarchy of content and layout. During the 40’s and 50’s, Swiss design often included a combination of a very large photograph with simple and minimal typography (Turner 2014).

The design grammar of the Swiss style has been combined with minimalism and the principle of “responsive design”, that is, that the interface should be responsive to the materiality and specificity of the device and to the context being displayed. Minimalism is a “term used in the 20th century, in particular from the 1960s, to describe a style characterized by an impersonal austerity, plain geometric configurations and industrially processed materials” (MoMA 2014). Robert Morris, one of the principal artists of Minimalism and author of the influential Notes on Sculpture, used “simple, regular and irregular polyhedrons. Influenced by theories in psychology and phenomenology”, which he argued “established in the mind of the beholder ‘strong gestalt sensation’, whereby form and shape could be grasped intuitively” (MoMA 2014).[4]

Robert Morris: Untitled (Scatter Piece), 1968-69, felt, steel, lead, zinc, copper, aluminum, brass, dimensions variable; at Leo Castelli Gallery, New York. Photo Genevieve Hanson. All works this article © 2010 Robert Morris/Artists Rights Society (ARS), New York.

The implications of these two competing world-views are far-reaching, in that much of the world’s initial contact, or touch points, with data services, real-time streams and computational power is increasingly through platforms controlled by these two companies. However, they are also deeply influential across the programming industries, and we see alternatives and multiple reconfigurations in relation to the challenge raised by the “flattened” design paradigms. That is, they both represent, if only in potentia, a power relation and, through this, an ideological veneer on computation more generally. Further, with the proliferation of computational devices – and the screenic imaginary associated with them in the contemporary computational condition – there appears a new logic which lies behind, justifies and legitimates these design methodologies.

It seems to me that these new flat design philosophies, in the broad sense, produce an order in precepts and concepts in order to give meaning and purpose not only to interactions with computational platforms but also, more widely, to everyday life. Flat design and material design are competing philosophies that offer alternative patterns of both creation and interpretation, which are meant to have implications not only for interface design but, more broadly, for the ordering of concepts and ideas, and for the practices and experience of computational technologies broadly conceived. Another way to put this could be to think of these moves as a computational founding, the generation of, or argument for, an axial framework for building, reconfiguration and preservation.

Indeed, flat design provides, and more importantly serves as, a translational or metaphorical heuristic that both re-presents the computational and teaches consumers and users how to use and manipulate new complex computational systems and stacks. In other words, in a striking visual technique flat design communicates the vertical structure of the computational stack on which the Stack corporations are themselves constituted. But it also begins to move beyond the specificity of the device as the privileged site of a computational interface interaction from beginning to end: interface techniques are abstracted away from the specificity of the device, for example through Apple’s “handoff” continuity framework, which also potentially changes reading and writing practices in interesting ways.

These new interface paradigms, introduced by the flat turn, open very interesting possibilities for the application of interface criticism, through unpacking and exploring the major trends and practices of the Stacks, that is, the major technology companies. Further than this, I think the notion of layers is instrumental in mediating the experience of an increasingly algorithmic society (think dashboards, personal information systems, the quantified self, etc.), and as such provides both an interpretative frame for a world of computational patterns and a constituting grammar for building these systems in the first place. The notion of the postdigital may also be a useful way into thinking about the link between art, computation and design sketched here (see Berry and Dieter, forthcoming), as may the importance of notions of materiality for the conceptualisations deployed by designers working within both the flat design and material design paradigms – whether of paper, glass, or some other “material” substance.[5]

Notes

[1] Many thanks to Michael Dieter and Søren Pold for the discussion which inspired this post. 
[2] The choice of paper and glass as the founding metaphors for the flat design philosophies of Google and Apple raises interesting questions for the way in which these companies articulate the remediation of other media forms, such as books, magazines, newspapers, music, television and film, etc. Indeed, the very idea of “publication”, and the material carrier for the notion of publication, is informed by this materiality, even if the affordance given by this conceptualisation is only notional. It would be interesting to see how the book is remediated through each of the design philosophies that inform both companies, for example.
[3] One is struck by the posters produced in the Swiss style which date to the 1950s and 60s but which today remind one of the mobile device screens of the 21st Century. 
[4] There are also some interesting links to be explored with Superflat, the postmodern art style and movement founded by the artist Takashi Murakami, which is influenced by manga and anime, both in terms of the aesthetic and in relation to the cultural moment in which “flatness” is linked to “shallow emptiness”.
[5] There is some interesting work to be done in thinking about the non-visual aspects of flat theory, such as the increasing use of APIs (for example, RESTful APIs), but also sound interfaces that use “flat” sound to indicate spatiality in terms of interface or interaction design.

Bibliography

Apple (2014) iOS Human Interface Guidelines, accessed 13/11/2014, https://developer.apple.com/library/ios/documentation/userexperience/conceptual/mobilehig/Navigation.html

Ashghar, T. (2014) The True History Of Flat Design, accessed 13/11/2014, http://www.webdesignai.com/flat-design-history/

Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury.

Berry, D. M. and Dieter, M. (forthcoming) Postdigital Aesthetics: Art, Computation and Design, Basingstoke: Palgrave Macmillan.

Bohn, D. (2014) Material world: how Google discovered what software is made of, The Verge, accessed 13/11/2014, http://www.theverge.com/2014/6/27/5849272/material-world-how-google-discovered-what-software-is-made-of

Cava, M. D. (2013) Jony Ive: The man behind Apple’s magic curtain, USA Today, accessed 1/1/2014, http://www.usatoday.com/story/tech/2013/09/19/apple-jony-ive-craig-federighi/2834575/

Deleuze, G. (1992) Postscript on the Societies of Control, October, vol. 59: 3-7.

Google (2014) Material Design, accessed 13/11/2014, http://www.google.com/design/spec/material-design/introduction.html

Google Layout (2014) Principles, Google, accessed 13/11/2014, http://www.google.com/design/spec/layout/principles.html

Jitkoff, N. (2014) This is Material Design, Google Developers Blog, accessed 13/11/2014,  http://googledevelopers.blogspot.de/2014/06/this-is-material-design.html

MoMA (2014) Minimalism, MoMA, accessed 13/11/2014, http://www.moma.org/collection/details.php?theme_id=10459

Turner, A. L. (2014) The history of flat design: How efficiency and minimalism turned the digital world flat, The Next Web, accessed 13/11/2014, http://thenextweb.com/dd/2014/03/19/history-flat-design-efficiency-minimalism-made-digital-world-flat/

On Latour’s Notion of the Digital

Bruno Latour at Digital Humanities 2014

Bruno Latour, professor at Sciences Po and director of the TARDE program (Theory of Actor-network and Research in Digital Environments), recently outlined his understanding of the digital in an interesting part of his plenary lecture at the Digital Humanities 2014 conference. He was honest in accepting that his understanding may itself be a product of his own individuation and pre-digital training as a scholar, which emphasised close-reading techniques and agonistic engagement around a shared text (Latour 2014). Nonetheless, in presenting his attempt to produce a system of what we might call augmented close-reading in the AIME system, he was also revealing about how the digital was being deployed methodologically, and about his notion of the digital’s ontological constitution.[1]

Unsurprisingly, Latour’s first move was to deny the specificity of the digital as a separate domain as such, highlighting both the materiality of the digital and its complex relationship with the analogue. He described both the analogue structures that underpin the digital processing that makes the digital possible at all (the materials, the specific electrical voltage structures and signalling mechanisms, the sheer matter of it all) and the digital’s relationship to a socio-technical environment. In other words, he swiftly moved away from what we might call the abstract materiality of the digital, its complex layering over an analogue carrier, and instead reiterated the conditions under which the existing methodological approach of actor-network theory was justified – i.e. the digital forms part of a network, is “physical” and material, requires a socio-technical environment to function, is a “complex function”, and so on.

Slide drawn from Latour (2014)

It would be too strong, perhaps, to state that Latour denied the specificity of the digital as such; rather, through what we might unkindly call a sophisticated technique of bait and switch, and the use of a convincingly deployed visualisation of what the digital “really” is, courtesy of an image drawn from Cantwell Smith (2003), the digital as not-physical was considered to have been refuted. Indeed, this approach echoes his earlier statements about the digital from 1997, where Latour argues:[2]

I do not believe that computers are abstract… there is (either) 0 and (or) 1 has absolutely no connection with the abstractness. It is actually very concrete, never 0 and 1 (at the same time)… There is only transformation. Information as something which will be carried through space and time, without deformation, is a complete myth. People who deal with the technology will actually use the practical notion of transformation. From the same bytes, in terms of ‘abstract encoding’, the output you get is entirely different, depending on the medium you use. Down with information (Lovink and Schultz 1997).

This is not a new position for Latour; indeed, in earlier work he has stated that “actually there is nothing entirely digital in digital computers either!” (original emphasis, Latour 2010a). Whilst this may well be Latour’s polemical style getting rather out of hand, it does raise the question of what it is that is “digital” for Latour, and therefore of how this definition enables him to make such strong claims. One is tempted to suppose that it is the materiality of the 0s and 1s that Cantwell Smith’s diagram points towards which enables Latour to dismiss out of hand the complex abstract digitality of the computer as an environment, an environment which, although not immaterial, is still constituted through a complex series of abstraction layers that do enable programmers to work and code in an abstract machine disconnected, in a logical sense, from the materiality of the underlying silicon. Indeed, without this abstraction within the space of digital computers there could be none of the complex computational systems and applications that are built today on abstraction layers. Here space is deployed both in a material sense, as the shared memory abstracted across memory chips and the hard disk (which may itself be memory chips), and as a metaphor for the way in which the space of computation is produced through complex system structures that enable programmers to work within a notionally two-dimensional address space that is abstracted onto a multidimensional structure.
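To make the point about abstraction layers concrete, here is a schematic sketch in Python, with an entirely hypothetical page table, of how a programmer’s logical address space can be translated onto scattered physical locations (memory chips, disk) that never appear at the level at which the programmer works.

```python
# A schematic sketch (not any real memory manager): the programmer addresses
# a flat logical space, while a translation layer scatters those addresses
# across distinct physical media, which remain invisible to the programmer.

PAGE_SIZE = 4  # deliberately tiny, for illustration

# Hypothetical mapping from logical page number to a physical location.
page_table = {
    0: ("ram", 7),
    1: ("ram", 2),
    2: ("disk", 91),   # swapped out, yet still one continuous logical space
}

def translate(logical_address: int) -> tuple[str, int]:
    """Resolve a logical address to (medium, physical offset)."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    medium, frame = page_table[page]
    return medium, frame * PAGE_SIZE + offset

if __name__ == "__main__":
    for addr in (0, 5, 9):
        print(addr, "->", translate(addr))
```

The sketch is only illustrative, but it shows why the “nothing entirely digital” claim misses something: the programmer’s working environment is real and consequential precisely because it is layered over, and logically decoupled from, the silicon beneath it.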

The Digital Iceberg (Berry 2014)

In any case, whilst our attention is distracted by this assertion, Latour moves to cement his switch by making the entirely reasonable claim that the digital lies within a socio-technical environment, and that the way to study the digital is therefore to identify what is observable of it. What is observable, he claims, are “segments of trajectories through distributed sets of material practice only some of which are made visible through digital traces”; thus, for Latour, the digital is less a domain and more a set of practices. This approach to studying the digital is, of course, completely acceptable, providing one is cognisant of the way in which the digital in our post-digital world resembles the structure of an iceberg, with only a small part ever visible to everyday life – even to empirical researchers (see diagram above). Otherwise, ethnographic approaches which a priori declare the abstractness of the digital as a research environment illegitimate lose the very specificity of the digital that their well-meaning attempt to capture its materiality calls for. Indeed, the way in which the digital, through complex processes of abstraction, is then able to provide mediators to and interfaces over the material is one of the key research questions to be unpacked when attempting to get a handle on the increasing proliferation of the digital into “real” spaces. As such, ethnographic approaches will only ever be part of a set of research approaches for the study of the digital, rather than, as Latour claims, the only or certainly the most important research methodology.

This is significant because the research agenda of the digital is being heightened, in part due to financial pressures and research grants deployed to engage with digital systems, but also due to the now manifest presence of the digital in all aspects of life, and hence due to the stakes of the methodological and theoretical positions deployed on how such phenomena should be studied. Should one undertake digital humanities or computational social science? Digital sociology or some other approach, such as actor-network theory? His claim that “the more thinking and interpreting becomes traceable, the more humanities could merge with other disciplines” reveals the normative line of reasoning that the (digital) humanities’ specificity as a research field could be usurped or supplemented by approaches that Latour himself thinks are better at capturing the digital (Latour 2014). Indeed, Latour claims in his book, Modes of Existence, that his project, AIME, “is part of the development of something known by the still-vague term ‘digital humanities,’ whose evolving style is beginning to supplement the more conventional styles of the social sciences and philosophy” (Latour 2013: xx).

To legitimate the claim of his flavour of actor-network theory as a research approach to the digital, Latour refers to Boullier’s (2014) work, Pour des sciences sociales de troisième génération, which argues that there have been three ages of the social context, with the latest emerging from the rise of digital technologies and the capture of digital traces they make possible. They are:

Age 1: Statistics and the idea of society 

Age 2: Polls and the idea of opinion 

Age 3: Digital traces and the idea of vibrations (quoted in Latour 2014).

Here, vibration follows from the work of Gabriel Tarde, who in 1903 referred to the notion of “vibration” in connection with an empirical social science of data collection, arguing that:

If Statistics continues to progress as it has done for several years, if the information which it gives us continues to gain in accuracy, in dispatch, in bulk, and in regularity, a time may come when upon the accomplishment of every social event a figure will at once issue forth automatically, so to speak, to take its place on the statistical registers that will be continuously communicated to the public and spread abroad pictorially by the daily press. Then, at every step, at every glance cast upon poster or newspaper, we shall be assailed, as it were, with statistical facts, with precise and condensed knowledge of all the peculiarities of actual social conditions, of commercial gains or losses, of the rise or falling off of certain political parties, of the progress or decay of a certain doctrine, etc., in exactly the same way as we are assailed when we open our eyes by the vibrations of the ether which tell us of the approach or withdrawal of such and such a so-called body and of many other things of a similar nature (Tarde 1962: 167–8).

This is the notion of vibration Latour deploys, although he prefers the notion of sublata (similar to capta, or captured data) rather than vibration. For Latour, the datascape is that which is captured by the digital and this digitality allows us to view a few segments, thus partially making visible the connections and communications of the social, understood as an actor-network. It is key here to note the focus on the visibility of the representation made possible by the digital, which becomes not a processual computational infrastructure but rather a set of inscriptions which can be collected by the keen-eyed ethnographer to help reassemble the complex socio-technical environments that the digital forms a part of. The digital is, then, a text within which are written the traces of complex social interactions between actants in a network, but only ever a repository of some of these traces.

Latour finishes his talk by reminding us that the “digital is not a domain, but a single entry into the materiality of interpreting complex data (sublata) within a collective of fellow co-inquirers”. This reiterates his point about the downgraded status of the digital as a problematic within social research, and its pacification through its articulation as an inscription technology (similar to books) rather than a machinery in and of itself, and shows us again, I think, that Latour’s understanding of the digital is correspondingly weak.

The use of the digital in such a desiccated form points to the limitations of Latour’s ability to engage with the research programme of investigating the digital, but also to the way in which a theologically derived close-reading method drawn from bookish practice may not be entirely appropriate for unpacking and “reading” computational media and software structures.[3] It is not that the digital does not leave traces, as patently it does; rather, these traces are encoded in such a form, at such quantities and at such high resolutions of data compression, that in many cases human attempts to read these inscriptions directly are fruitless and instead require the mediation of software, and hence a double hermeneutic which places human researchers twice (or more) removed from the inscriptions they wish to examine and read. This is not to deny the materiality of the digital, or of computation itself, but it certainly makes the study of such matter and practices much more difficult than the claims to visibility that Latour presents suggest. It also suggests that Latour’s rejection of the abstraction in and of computation that electronic circuitry makes possible is highly problematic and ultimately flawed.

Notes

[1] Accepting the well-designed look of the website that contains the AIME project, there can be no disputing the fact that the user experience is shockingly bad. Not only is the layout of the web version of the book completely unintuitive but the process of finding information is clumsy and annoying to use. One can detect the faint glimmer of a network ontology guiding the design of the website, an ontology that has been forced onto the usage of the text rather than organically emerging from use, indeed the philosophical inquiry appears to have influenced the design in unproductive ways. Latour himself notes: “although I have learned from studying technological projects that innovating on all fronts at once is a recipe for failure, here we are determined to explore innovations in method, concept, style, and content simultaneously” (Latour 2013: xx). I have to say that unfortunately I do think that there is something rather odd about the interface that means that the recipe has been unsuccessful. In any case, it is faster and easier to negotiate the book via a PDF file than through the web interface, or certainly it is better to keep ready to hand the PDF or the paper copy when waiting for the website to slowly grind back into life. 
[2] See also, Latour stating: “the digital only adds a little speed to [connectivity]. But that is small compared to talks, prints or writing. The difficulty with computer development is to respect the little innovation there is, without making too much out of it. We add a little spirit to this thing when we use words like universal, unmediated or global. But if we say that, in order to make visible a collective of 5 to 10 billion people, in the long history of immutable mobiles, the byte conversion is adding a little speed, which favours certain connections more than others, then this seems a reasonable statement” (Lovink and Schultz 1997).
[3] The irony of Latour (2014) revealing the close reading practices of actor-network theory as a replacement for the close reading practices of the humanities/digital humanities is interesting (see Berry 2011). Particularly in relation to his continual reference to the question of distant reading within the digital humanities and his admission that actor-network theory offers little by way of distant reading methods. Latour (2010b) explains “under André Malet’s guidance, I discovered biblical exegesis, which had the effect of forcing me to renew my Catholic training, but, more importantly, which put me for the first time in contact with what came to be called a network of translations – something that was to have decisive influence on my thinking… Hence, my fascination for the literary aspects of science, for the visualizing tools, for the collective work of interpretation around barely distinguishable traces, for what I called inscriptions. Here too, exactly as in the work of biblical exegesis, truth could be obtained not by decreasing the number of intermediary steps, but by increasing the number of mediations” (Latour 2010b: 600-601, emphasis removed).

Bibliography

Berry, D. M. (2011) Understanding Digital Humanities, Basingstoke: Palgrave Macmillan.

Cantwell Smith, B. (2003). Digital Abstraction and Concrete Reality. In Impressiones, Calcografia Nacional, Madrid.

Latour, B. (2010a) The migration of the aura or how to explore the original through its fac similes, in Bartscherer, T. (ed.) Switching Codes, University of Chicago Press.

Latour, B. (2010b) Coming out as a philosopher, Social Studies of Science, 40(4) 599–608.

Latour, B. (2013) An Inquiry into Modes of Existence: An Anthropology of the Moderns, Harvard University Press.

Latour, B. (2014) Opening Plenary, Digital Humanities 2014 (DH2014), available from http://dh2014.org/videos/opening-night-bruno-latour/

Lovink, G. and Schultz, P. (1997) There is no information, only transformation: An Interview with Bruno Latour, available from http://thing.desk.nl/bilwet/Geert/Workspace/LATOUR.INT

Tarde, G. (1903/1962) The Laws of Imitation, New York: Henry Holt and Company.
