
The Minigraph: The Future of the Monograph?


It has taken digital a lot longer than many had thought to provide a serious challenge to print, but it seems to me that we are now in a new moment in which digital texts enable screen-reading, if it is not an anachronism to still call it that, as a sustained reading practice. Here, I am thinking particularly of the way in which screen technologies, including the high-resolution retina displays common on iPhones, Kindle e-ink and so forth, combined with much more sensitive typesetting and design practices in relation to text, are producing long-form texts that are pleasurable to read on a screen-based medium and as ebooks. This has happened most noticeably in magazine articles and longer newspaper features, but it is beginning to drift over into the well-designed reading apps that we find on our mobile devices, such as Pocket and the “Reader” function in Safari. With this change, questions are finally being asked seriously about our writing practices, especially in terms of the assumptions and affordances that are coded into word-processing software such as Microsoft Word, which assumes, if not enforces, a print-medium mentality in the writing practice. Word wants you to print the documents you write, and this prescriptive behaviour by the software encourages us to “check” our documents on “real” paper before committing to them – even if the final form is a digital PDF. Even the humble PDF is designed for printing, as anyone who has tried to read a PDF document on a digital screen will attest, with its clunky and ill-formatted structure that actively fights against a user trying to resize the document in order to read it. But when the reading practices of screen media are sufficient, many of the assumptions built into screen writing can be jettisoned, and the most disruptive and unpredictable of these to go will be the practice of writing for paper.

For there is little doubt that writing and reading on the screen is different from print (see Berry 2012; Gold 2012). These differences are not just found at a technical level, for they also include certain forms of social practice, such as reading in public, passing around documents, sharing ideas and so forth. They also include the kinds of social signalling that digital documents have been very poor at incorporating into their structures, such as the cover, the publisher, the “name”, or a striking design or image. Nonetheless, certainly at the present phase of digital texts, I think it is the typesetting and typography, combined with the social reading practices that take place, such as social sharing, marking, copying/pasting and commenting, that make digital suddenly a viable way of creating and consuming textual works. In some ways, the social signalling of the cover artwork and so on has been subsumed into social media such as Facebook and Twitter, but I think it is only a matter of time before this is incorporated into mobile devices in some way, once screen technologies, especially an e-ink back cover, can be built for pennies. But to return to the texts themselves, the question of writing, of putting pen to paper, an ironic phrase if ever there was one, is on the cusp of radical change. The long thirty-year period of stable writing software created by the virtual monopoly that Microsoft gained over desktop computers, most notably represented by Windows, its desktop operating system, and Office, its productivity suite, is drawing to a close. From its initial introduction in 1983 on the Xenix system as Multi-Tool Word, renamed that same year to the familiar Microsoft Word that we all know today (and often hate), Word has taken print as the lodestar of its design.

The next stage of digital text is unveiling before our eyes, and as it does, much of the textual apparatus of print is migrating to the digital platform; as it does so, the advantages of new search and discovery practices make books extremely visible and usable again, such as through Google Books (Dunleavy 2012). There is still a lot of experimentation in this space and some problems remain. For example, there is currently no viable alternative to the “chunking” process of reading that print has taught us through pages and page numbering, nor is there a means of bookmarking as convenient as the obviousness of the changing weight of the book as it moves through our hands, or the visual cues afforded by the page volume changing from unread to read as we turn the pages. However, this has been mitigated in some ways by a turning away from the very long form, the book- or monograph-length text of around 80,000 words, to the moderate long form, represented by the 15-40,000 word text which I want to call the minigraph.

By minigraph I am seeking to distinguish a specific length of text, and therefore size of book, that is able to move beyond the very real limitations of the 6-8,000 word article, and yet is not at such a length that the chunking problem of reading digital texts becomes too much of a problem. In other words, at its current stage of implementation, I think that digital long-form texts are most comfortable to read when they stay within this golden ratio of 15-40,000 words, broken into five or six chapters. The lack of chunking is still a problem, in my opinion, without helpful “page” numbers, and I don’t think that paragraph numbering has provided a usable solution to this, but the shortness of the text means that it is readable within a reasonable period of time, creating a de facto chunking at the level of the minigraph chapter (between 2,000 and 5,000 words). Indeed, the introduction of an algorithmic paging system that is device-independent would also be helpful, for example through a notion of “planes” which are analogous to pages but calculated in real time (see Note 1 below). Keeping to this length helps to sidestep the problem of fatigue in digital reading, apparent even in our retina/e-ink screen practices, while also creating works that are long enough to be satisfying to read and can offer interesting discussion, digression and scholarly apparatus as necessary. Other publishers have already been experimenting with the form, such as Palgrave with its Pivot series, a new e-book format: “at 30,000 to 50,000 words, it’s longer than a journal article but shorter than a traditional monograph. The Palgrave Pivot, said Hazel Newton, head of digital publishing, ‘fills the space in the middle’” (Cassuto 2013). Indeed, Stanford University Press has also started “to release new material in the form of midlength e-books. ‘Stanford Briefs’ will run 20,000 to 40,000 words in length”, which Cassuto (2013) similarly calls the “mini-monograph”.

The next question is clearly how one should write a minigraph, considering the likelihood that Microsoft Word will algorithmically prescribe paper norms, which in academia tend towards either the 7,000-word article or the 70,000-word monograph. Here, I think Dieter (2013) is right to make links with the writing practices of Book Sprints as a connecting thread to new forms of publishing (see Hyde 2013). The Book Sprint is a “genre of the ‘flash’ book, written under a short timeframe, to emerge as a contributor to debates, ideas and practices in contemporary culture… interventions that go well beyond a well-written blog-post or tweet, and give some substantive weight to a discussion or issue… within a range of 20-40,000 words” (Berry and Dieter 2012). This rapid and collaborative means of writing is a very creative and intensified form of writing, but it also tends towards the creation of texts that appear to be at an “appropriate” size for the digital medium which makes those writing practices possible in the first place. Book Sprints themselves are usually formed from 4-8 people actively involved in the writing process, facilitated by another, non-writing member, a structure which conveniently maps onto the minigraph chapters discussed earlier. For Dieter, the Book Sprint is conducive to new writing practices, and by extension new reading practices, for network cultures, and therefore to “formations that break from subjugation or blockages in pre-existing media and organizational workflows” (Dieter 2013). In this I think he is broadly correct; however, Book Sprints also point towards certain affordances for textual production that are conducive to reading and writing in a digital medium and, in the context of this discussion, to the word count of a minigraph.

Nick Montfort (2013) has suggested a new, predominantly digital form of writing that enables different forms of scholarly communication, in his case the technical report, which he argues “is as fast as a speeding blog, as detailed and structured as a journal article, and able to be tweeted, discussed, assessed, and used as much as any official publication can be. It is issued entirely without peer review”. Montfort, however, connects the technical report to the “grey literature” that is not usually considered part of scholarly publishing as such. Experiments such as the “pamphlets” issued by the Stanford Literary Lab, which Montfort argues are technical reports in all but name, seem to lie between 10,000 and 15,000 words in length, slightly longer than a journal article and yet a little shorter than a minigraph.

However, a key difference, at least in the form in which I am considering the minigraph as a viable form of scholarly production, is that neither the Book Sprint nor the technical report is peer-reviewed, although they might be “peer-to-peer reviewed” (see Cebula 2010; Fitzpatrick 2011). Rather, they are rapid forms of production, sharing and collaboration, geared towards social media and intervention or towards technical documentation. In contrast, the minigraph would share with the other main scholarly outputs, the journal article and the monograph, the need for peer review and for production at a high level of textual quality. This is where the minigraph points to the emergent affordances of the digital that enable familiar kinds of scholarly activity, such as presenting finished work, carefully annotated and referenced, supported and discursively presented, through these nascent digital textual technologies. That is, if these intuitions are right about the current state of digital technologies and their affordances for the writing and reading of scholarly work, then the minigraph might be an object with the right structure and form for digital scholarship, augmenting the article, the review, the monograph and so forth. Indeed, the minigraph might offer exactly the kind of compromise for scholarly work that is called for by, for example, Drucker (2013) and Nardone and Fitzpatrick (2013), and point towards new possibilities for writing beyond the “article” or the “book”, forms of what Robertson (2013) describes as “scholarship” that are institutionally constraining on academic creativity.

In some ways the minigraph seems a much less radical suggestion than the multi-modal, all-singing, all-dancing digital object that many have been calling for or describing. However, the minigraph, as conceptualised here, is actually potentially deeply computational in form; more properly, we might describe the minigraph as a code-object. In this sense, the minigraph is able to contain programmable objects itself, in addition to its textual load, opening up many possibilities for interactive dimensions to its use, as suggested by the Computable Document Format (CDF) created by Wolfram. The minigraph as described here does not, of course, exist as such, although its form is detectable in, for example, the documents produced by the Quip app, the dexy format as “literate documentation”, or the Booktype software. It is manifestly not meant to be in the form of Google Docs/Drive, which is essentially traditional word-processing software in the cloud, and which ironically still revolves around a print metaphor. The minigraph is, then, a technical imaginary for what digital scholarly writing might be, one that remains to be coded into concrete software and manifested in the practices of scholarly writers and readers. Nonetheless, as a form of long-form text amenable to the mobile practices of readers today, the 15-40,000 word minigraph could provide a key expressive scholarly form for the digital age.
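
To make this slightly more tangible, here is a purely illustrative sketch, written as a simple Python structure, of how a minigraph-as-code-object might carry its plane resolution, chapters and embedded programmable objects alongside the text itself; every field name here is my own assumption for the purpose of illustration, not a description of Quip, dexy, Booktype or Wolfram’s CDF.

minigraph = {
    "title": "An Example Minigraph",   # hypothetical title
    "plane_resolution": 300,           # words per plane, set book by book (see Note 1)
    "chapters": [
        {"title": "Introduction", "words": 3200, "source": "chapters/01.txt"},
        {"title": "The Argument", "words": 4800, "source": "chapters/02.txt"},
    ],
    "code_objects": [
        # an executable figure or model embedded in the text, rather than a static image
        {"id": "figure-1", "language": "python", "source": "objects/figure1.py"},
    ],
}

The point is simply that the metadata needed for algorithmic paging, and the programmable objects themselves, travel with the text rather than being bolted on by a print-oriented word-processor.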

Notes

[1] The minigraph chunks would fall at 250-350 word intervals, roughly pages, with chapters of 2-5,000 words. There is no reason why the term “page” could not be used for these chunks, but perhaps “plane” is more appropriate, in terms of chunks representing vertical “cuts” in the text at an appropriate frequency. So “plane 5” would be analogous to page 5, but mathematically calculable: with a resolution of 300 words per plane, a plane starts at approximately word (300 × (plane number - 1)) + 1 and ends at word 300 × plane number. This would make the page both algorithmically calculable, and therefore device-independent, and suitable for scholarly referencing, producing usable, user-friendly numbering throughout the text. As the planes are represented on screen by a number, the numbering system would be immediately comprehensible to existing users of printed texts, and therefore offer a simple transition from paper-based page numbering to algorithmic numbering of documents. If the document were printed, the planes could be automatically reformatted to the page size, further making the link between page and plane straightforward for the reader, who might never realise the algorithmic source of the numbering system for plane chunks in a minigraph. Indeed, one might place the “plane resolution” within the minigraph text itself, in this case “300”, enabling different plane chunks to be used within different texts, and hence changing the way in which a plane is calculated on a book-by-book basis – very similar to page numbering. One might even have different plane resolutions within the chapters of a book, enabling different chunks in different chapters or regions.
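
As a minimal sketch of this calculation, assuming 1-indexed planes (so that plane 1 behaves like page 1) and 1-indexed word positions, it might be implemented along the following lines in Python; the function names and defaults are mine, for illustration only, and not part of any existing reading software.

def plane_bounds(plane_number, resolution=300):
    """Return the (start_word, end_word) positions covered by a plane."""
    start_word = (plane_number - 1) * resolution + 1
    end_word = plane_number * resolution
    return start_word, end_word

def plane_of_word(word_position, resolution=300):
    """Return the plane number in which a given word position falls."""
    return (word_position - 1) // resolution + 1

# With the default resolution of 300, "plane 5" covers words 1201-1500
# on any device, so it can be cited much like a page number.
print(plane_bounds(5))       # (1201, 1500)
print(plane_of_word(1350))   # 5

Because the plane resolution travels with the text, as suggested above, the same plane numbers would be reproduced on any device, or indeed in print.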

Bibliography

Berry, D. M. (2012) Understanding Digital Humanities, London: Palgrave.

Berry, D. M. and Dieter, M. (2012) Book Sprinting, accessed 14/08/2013, http://www.booksprints.net/2012/09/everything-you-wanted-to-know/

Cassuto, L. (2013) The Rise of the Mini-Monograph, The Chronicle of Higher Education, accessed 18/08/2013, http://chronicle.com/article/The-Rise-of-the-Mini-Monograph/141007/

Cebula, L. (2010) Peer Review 2.0, North West History, accessed 14/08/2013, http://northwesthistory.blogspot.co.uk/2010/09/peer-review-20.html

Dieter, M. (2013) Book Sprints, Post-Digital Scholarship and Subjectivation, Hybrid Publishing Lab, accessed 14/08/2013, http://hybridpublishing.org/2013/07/book-sprints-post-digital-scholarship-and-subjectivation/

Drucker, J. (2013) Scholarly Publishing, Amodern, accessed 14/08/2013, http://amodern.net/article/scholarly-publishing-micro-units-and-the-macro-scale/

Dunleavy, P. (2012) Ebooks herald the second coming of books in university social science, LSE Review of Books, accessed 18/08/2013, http://blogs.lse.ac.uk/lsereviewofbooks/2012/05/06/ebooks-herald-the-second-coming-of-books-in-university-social-science/

Fitzpatrick, K. (2011) Planned Obsolescence: Publishing, Technology, and the Future of the Academy, New York: New York University Press.

Gold, M. K. (2012) Debates in the Digital Humanities, Minneapolis: University of Minnesota Press.

Hyde, A. (2013) Book Sprints, accessed 14/08/2013, http://www.booksprints.net

Montfort, N. (2013) Beyond the Journal and the Blog, Amodern, accessed 14/08/2013, http://amodern.net/article/beyond-the-journal-and-the-blog-the-technical-report-for-communication-in-the-humanities/

Nardone, M., and Fitzpatrick, K. (2013) We Have Never Done It That Way Before, Amodern, accessed 14/08/2013, http://amodern.net/article/we-have-never-done-it-that-way-before/

Robertson, B. J. (2013) The Grammatization of Scholarship, Amodern, accessed 14/08/2013, http://amodern.net/article/the-grammatization-of-scholarship/


New Book: New Aesthetic, New Anxieties

New Aesthetic, New Anxieties is the result of a five-day Book Sprint organized by Michelle Kasprzak and led by Adam Hyde at V2_ from June 17–21, 2012. Authors: David M. Berry, Michel van Dartel, Michael Dieter, Michelle Kasprzak, Nat Muller, Rachel O’Reilly and José Luis de Vicente. Facilitated by: Adam Hyde.

You can download the e-book as an EPUB, MOBI, or PDF.

EPUB: http://www.v2.nl/files/new-aesthetic-new-anxieties-epub

MOBI: http://www.v2.nl/files/new-aesthetic-new-anxieties-mobi

PDF: http://www.v2.nl/files/new-aesthetic-new-anxieties-pdf

Annotatable online version: http://www.booki.cc/new-aesthetic-new-anxieties/_draft/_v/1.0/preface/

The New Aesthetic was a design concept and netculture phenomenon launched into the world by London designer James Bridle in 2011. It continues to attract the attention of media art and to throw up associations with a variety of situated practices, including speculative design, net criticism, hacking, free and open source software development, locative media, sustainable hardware and so on. This is how we have considered the New Aesthetic: as an opportunity to rethink the relations between these contexts in the emergent episteme of computationality. There is a desperate need to confront the political pressures of neoliberalism manifested in these infrastructures. Indeed, these are risky, dangerous and problematic times; a period when critique should thrive. But here we need to forge new alliances, invent and discover problems of the common that nevertheless do not eliminate the fundamental differences in this ecology of practices. In this book, perhaps provocatively, we believe a great deal could be learned from the development of the New Aesthetic not only as a mood, but as a topic and fix for collective feeling that temporarily mobilizes networks. Is it possible to sustain and capture these atmospheres of debate and discussion beyond knee-jerk reactions and opportunistic self-promotion? These are crucial questions that the New Aesthetic invites us to consider, if only to keep a critical network culture in place.


New Book: Life in Code and Software: Mediated life in a complex computational ecology

Life in Code and Software (cover image by Michael Najjar)

New book out in 2012 on Open Humanities Press: Life in Code and Software: Mediated life in a complex computational ecology.


This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. Life in Code and Software introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology, made up of disparate software ecologies, that we inhabit. As such, we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. That is, we live in a world where computational concepts and ideas are foundational, or ontological, which I call computationality, and within which code and software become the paradigmatic forms of knowing and doing – such that other candidates for this role, such as air, the economy, evolution, the environment, satellites and so on, are understood and explained through computational concepts and categories.


Code, Foucault and Neoliberal Governmentality

For Foucault, neoliberal governmentality is a particular form of post-welfare state politics in which the state essentially outsources the responsibility for ensuring the ‘well-being’ of the population. The primary recipient of this responsibility is the subject, understood through a strengthened notion of the rational individual. Indeed, these new subjectivities are expected to ‘look after themselves’. This form of governmentality involves an extremely diffuse form of rule whereby strategies and imperatives of control are distributed through a variety of media and are implicated in even the most mundane practices of everyday life. As Schecter writes,

Foucault regards the exercise of power and the formalisation of knowledge to be intimately bound up with the constitution of living individuals as subjects of knowledge, that is, as citizens and populations about whom knowledge is systematically constructed… Subjects are not born subjects so much as they become them. In the course of becoming subjects they are classified in innumerable ways which contribute to their social integration, even if they are simultaneously marginalised in many cases (Schecter 2010: 171). 

So, for example, the state promotes an ethic of self-care which is justified in terms of a wider social responsibility and which is celebrated through examples given in specific moments, represented as individual acts of consumption that contribute to a notion of good citizenship. Using recycling bins, caring for one’s teeth, stopping smoking and so forth are all actively invested by the state as beneficial to both individual and collective care, but most importantly they are made the responsibility of the citizen to abide by.

Neoliberal governmentality also gestures towards the subordination of state power to the requirements of the marketplace, the implication being that ‘political problems’ are re-presented or cast in market terms. Within this framework citizens are promised new levels of freedom, consumerism, customisation, interactivity and control over their lives and possessions. In other words, they are offered an expectation, one that remains unfulfilled, as to the extent to which they are able to exert their individual agency.

In order to facilitate this governmental platform, certain infrastructural systems need to be put in place: bureaucratic structures, computational agencies and so forth. For example, it has become increasingly clear that providing information to citizens is not sufficient for controlling and influencing behaviour. Indeed, people’s ability to understand and manipulate raw data or information has been found to be profoundly limited in many contexts, with a heavy reliance on habit, understood as part of the human condition.

It is here that the notion of compactants (computational actants) allows us to understand the way in which computationality has increasingly become constitutive of the understanding of important categories in late capitalism, like privacy and self-care. Here we could say that we are interested in a transition from the juridification, through the medicalisation, to the ‘computationalisation’ of reason. Hence, following Foucault, we are interested in studying the formation of discrete powers rather than power in general. That is, Foucault is interested ‘in the processes through which subjects become subjects, the truth becomes truth, and the changing conditions under which this happens, which in the first instance is the discrepancy between the visible and the readable’ (Schecter 2010: 173). Or as Foucault himself writes:

What is at stake in all this research about madness, illness, delinquency, and sexuality, as well as everything else I have been talking about today, is to show how the coupling of a series of practices with a truth regime forms an operative knowledge-power system (dispositif) which effectively inscribes in the real something that does not exist, and which subjects the real to a series of criteria stipulating what is true and what is false, whereby these criteria are taken to be legitimate. It is that moment which does not exist as real and which is not generally considered relevant to the legitimacy of a regime of true and false, it is that moment in things that engages me at the moment. It marks the birth of the asymmetrical bi-polarity of politics and economics, that is, of that politics and economics which are neither things that exist nor are errors, illusions or ideologies. It has to do with something which does not exist and which is nonetheless inscribed within the real, and which has great relevance for a truth regime which makes distinctions between truth and falsity (Foucault, The Birth of Bio-Politics, quoted in Schecter 2010: 173).

Indeed, the way in which compactants generate certain notions of truth and falsity is a topic requiring close investigation, both in terms of the surface interface generating a ‘visible’ truth, and in terms of the notion of a computational, or cloud, truth that is delivered from the truth-machines that lie somewhere on the networks of power and knowledge.

Foucault suggests that if there is a ‘system’ or ensemble of systems, the task is somehow to think systemic functioning outside of the perspective of the subject dominated by or in charge of the so-called system. Critical thinking can deconstruct the visible harmony between casual seeing and instrumental reason… in contrast with monolithic appearances, surfaces are characterised by strata and folds that can inflect power to create new truths, desires and forms of experience (Schecter 2010: 175).

Here we can make the link between sight and power, and of course sight itself is deployed such that the ‘visible’ is neither transparent nor hidden. Compactants certainly contribute to the deployment of the visible, through the generation of certain forms of geometric and photographic truths manifested in painted screens and surfaces.

Bibliography

Schecter, D. (2010) The Critique of Instrumental Reason from Weber to Habermas, New York: Continuum.

New Book: Understanding Digital Humanities

The application of new computational techniques and visualisation technologies in the Arts & Humanities is resulting in fresh approaches and methodologies for the study of new and traditional corpora. This ‘computational turn’ takes methods and techniques from computer science to create innovative means of close and distant reading. This edited book aims to discuss the implications and applications of what has been called the Digital Humanities and the questions raised when using algorithmic techniques. Within this field there are important debates about narrative versus database, pattern-matching versus hermeneutics, and the statistical paradigm versus the data-mining paradigm. Additionally, new forms of collaboration within the Arts and Humanities are raised through modular Arts and Humanities research teams and new organisational structures (e.g. Big Humanities), together with techniques for collaborating in an interdisciplinary way with other disciplines (e.g. hard interdisciplinarity versus soft interdisciplinarity). This book draws from key researchers in the field to give a comprehensive introduction to some of the key debates and questions.

Brilliant tip when writing a book with Word Mac 2008

If, like me, you are in the middle of editing a huge Word document, in my case a book of 70,000 words, then you are jumping all over the place and it is driving you mad. Word does not seem particularly helpful here and you keep losing track of where you are, especially if you are editing anything academic where you keep needing to go to the bibliography to enter references as you edit the text.
Well, with a bit of luck, I discovered that if you click the weird little circle (called ‘Select Browse Object’ in Microsoftese) on the scroll bar (between the double up/down arrows) you can set the double arrows to jump to the headings you have declared. Simply by setting the chapter headings to be Heading 1, including the bibliography, you can now zip around the document very quickly to move things and edit, etc. I tried using the bookmark function but it is so poorly implemented that it doesn’t even compare to this browsing style.
Update: Even better tips include editing the preferences to turn off font rendering (which slows down the computer) and turning off the automatic word count, which also slows the computer to a crawl…