Experimental Publishing Compendium



Evaluation of research by the scholarly community is essential to conducting and publishing research. In response to changing publication formats, and to promote more relational forms of knowledge production, scholarly communities have started to reconsider blind peer review, which has increasingly become the de facto standard for evaluating academic books. For example, scholars have turned to open and community review while developing new methods of assessing works in progress, digital multimodal research, and experimental academic books.

Full description

Although (blind) peer review is currently the gold standard for most humanities book-based research, the term itself is quite recent. Introduced in the 1960s and 1970s, it developed hand in hand with systems of metrics designed to measure and control academic prestige. Initially, scholarly communities maintained various academic refereeing systems as a form of self-governance. Once outsourced to commercial publishers, refereeing was rebranded as peer review and incorporated as an audit and regulatory tool (Fyfe et al., 2017; Ross-Hellauer and Derrick, 2019).

Critiques of the double-blind model include that it shuts the author out of the conversation around the work, that it overrates the anonymity of authors and reviewers, and that it has "the effect of giving reviewers power without responsibility" (Godlee, 2000; Fitzpatrick, 2011). This 'veil of anonymity', together with the assessment of research by only a select group of experts, contributes to "the black box nature of blind peer review" and its lack of transparency and accountability (Ross-Hellauer, 2017). Although often idealised as impartial and objective with regard to gender, nationality, institutional affiliation, or language, double-blind review doesn't necessarily protect against reviewer bias, as the system has proven ineffective at masking authorial identity.

The digital environment has offered opportunities to improve research evaluation, leading to various experiments with online and open peer review that focus on discussing the research under review. Beyond evaluation, quality control, and gatekeeping practices, review practices in the humanities have predominantly focused on the "collaborative improvement of research", on constructive review and community knowledge production (Knöchelmann, 2019). Scholarly communities are conducting various experiments with new forms of peer review that contribute to the co-production of knowledge and adapt our evaluation systems to accommodate the myriad forms and formats of book-based research.

Experimental uses

Open peer review is one of the more popular alternative assessment methods. Different and sometimes contrasting notions of open peer review are united by the ambition to rethink how we evaluate research in line with the ethos of open science. This includes, for example, "making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process" (Ross-Hellauer, 2017). Open peer review can stimulate interaction with books when it takes place on the book's publication platform, or when it involves review at a more granular paragraph or sentence level. Open forms of peer review can be facilitated through a variety of means, many of which make use of commenting, annotation, or versioning, depending on the chosen mode of interaction with the book under review. More traditional forms of peer review maintain a separation between review and book, e.g. by using structured review forms or book reviews published post-publication. Digital annotation, by contrast, enables reviewers to write directly in or on the book under review, creating a more immediate and interactive experience.

One notable example of open review is Kathleen Fitzpatrick's Planned Obsolescence (2011), a book published and reviewed online on the MediaCommons platform, which allows line-by-line public annotation of texts using the CommentPress WordPress plugin. A more recent example using this plugin is Mattering Press's The Ethnographic Case, edited by Emily Yates-Doerr and Christine Labuski: an experimental, online, open access book that invited reader interaction in a process of post-publication peer review. In this context Fitzpatrick talks about alternative forms of "community-based authorisation" or crowdsourced review, which happen after publication instead of before.
This opens review up beyond the opinions of a small selection of often senior scholars (a system that also risks breeding conservatism, e.g. towards emerging forms of knowledge) and "lays bare" the mechanisms of review, making it more transparent, including about who the reviewers are (Fitzpatrick and Rowe, 2010). An additional benefit is that readers and authors are placed in conversation, further "deepening the relationship between the text and its audience" (Fitzpatrick, 2012). Open peer review can help build communities around a book in a way that starts to elide the differences between author, reviewer, and reader. This calls for more collegial approaches to review. Editors can facilitate this by supporting open conversations between author and reviewer as a collaborative process rather than one grounded in antagonism or gatekeeping. This was also MIT Press's experience using collaborative community review on the PubPub platform (Staines, 2019). The HIRMEOS project implemented the hypothes.is plugin as an annotation service on the OpenEdition Books platform to conduct open post-publication peer review, creating a space for conversation around publications and stimulating new forms of peer review. In this well-documented project, publishers were directly involved as moderators, writing guidelines and suggesting reviewers (Bertino and Staines, 2019; Dandieu and HIRMEOS Consortium, 2019).

Guidelines for evaluating digital multimodal scholarship are increasingly being established within different fields. These guidelines focus on evaluating works on their own merits, in the media in which they are produced, and in an ongoing manner, and they include technical, design, computational, and interface elements in their evaluation, from digital humanities projects to archives, tools, and resources (Anderson and McPherson, 2011; Guiliano and Risam, 2019; Nyhan, 2020). Digital scholarship necessitates a reassessment of review practices as it differs from traditional single-author work, being "often collaborative," "rarely finished," and "frequently public," meaning that new assessment methods may be needed and appropriate (Risam, 2014). Our common linear publishing and evaluation workflows might need to be adapted to accommodate versioned and processual books, which would involve less assessment, validation, or gatekeeping, and more feedback to roll into the digital project's next phase.


In the sciences open peer review has gained considerable ground, yet humanities book publishing has not seen a similar development. One of the main drawbacks of open peer review is the tension between anonymity and openness. Open review can introduce bias (e.g. gender bias) and self-censorship, as reviewers might blunt their critique in an open setting. There is also a power imbalance: the anonymity of double-blind review can protect early-career reviewers and authors, a protection that open forums lack. Another clear problem is creating a sufficiently large community around a book. There is a general reticence to take part in open peer review due to time constraints, the need to familiarise oneself with new technologies, and the lack of acknowledgement in reward and evaluation systems. At the same time, open peer review can make visible the academic labour and service work that reviewers actually do to support their fields. In general, however, a more substantial cultural shift might be needed, one in which we come to see review as a contribution to collective knowledge production.

A further challenge is the amount of editorial labour involved in setting up open peer review systems, bringing together a community, and moderating the process. The takeaways of the HIRMEOS project include the importance of community outreach activities (involving both publishers and authors) and the formulation of clear guidelines, user guides, and rules of good conduct. Workload (and whether open review fell within their mission as publishers) was one of the biggest inhibitors to participation for publishers, while 'lack of time' was similarly the main reason given by reviewers.

Further reading

Dandieu, C. and HIRMEOS Consortium (2019) Report on Post-Publication Open Peer Review Experiment. https://zenodo.org/record/3275651

Fitzpatrick, K. and Rowe, K. (2010) 'Keywords for Open Review'. LOGOS: The Journal of the World Book Community 21 (3–4), 133–141.

Ross-Hellauer, T. and Görögh, E. (2019) 'Guidelines for Open Peer Review Implementation'. Research Integrity and Peer Review 4 (1), 4.