Vanishing publications: we hear from the experts working on the preservation of online scholarship (part two)
- Deborah Thorpe
- November 2, 2023
In the previous part of this series, I introduced the issue of vanishing open access journals and spoke with Alicia Wise of CLOCKSS – who reminded us that this problem is not specific to open access: all online scholarship is at risk. Now, I’m going to turn to Dr Rebekka Kieswetter and Dr Miranda Barnes to find out more about digital preservation in relation to open access monographs.
By Dr Deborah Thorpe, Research Data Steward, UCC Library
Dr Kieswetter is part of the ‘Experimental Publishing and Reuse’ Work Package of the Open Book Futures (OBF) project, the successor to the Community-led Open Publishing Infrastructures for Monographs (COPIM) project for open access book publishing. Dr Barnes works on the ‘Archiving and Digital Preservation’ Work Package. So, together they will speak about open access monographs, experimental modes of publishing, and digital preservation.
An interview with Rebekka Kieswetter and Miranda Barnes about preserving digital books
Deborah Thorpe: I have a question for you both: when we talk about open access research, we often think of journal articles, but there is a gradual move towards open access monographs. Is the long-term availability of these digital books something that researchers should be concerned about?
Rebekka Kieswetter and Miranda Barnes: This depends on the form and format of the monograph. Many open access monographs in the humanities are modelled after the static and bound print book and formatted as long-form PDFs and HTML files, for example. In this case, long-term preservation within a digital preservation archive such as Portico or CLOCKSS, or archiving – for example in institutional repositories or on a publisher’s website – is relatively simple: a file package typically includes the book in one (or several) file format(s) (for example, PDF) together with a metadata file.
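As a rough illustration of what such a "simple" deposit package involves, the sketch below bundles a single PDF with a metadata record. The file names and metadata fields are invented for the example and do not follow the actual ingest specifications of Portico, CLOCKSS, or any particular repository.

```python
# Illustrative sketch only: assembles a hypothetical deposit package for a
# "simple" open access monograph (one PDF plus a metadata file). Field names
# and layout are invented and do not reflect any real archive's schema.
import json
import zipfile
from pathlib import Path

metadata = {
    "title": "An Example Open Access Monograph",       # hypothetical title
    "creators": ["A. Author"],
    "publisher": "Example Open Press",
    "identifier": "https://doi.org/10.0000/example",   # placeholder DOI
    "license": "CC BY 4.0",
    "files": ["monograph.pdf"],
}

def build_deposit_package(pdf_path: Path, out_path: Path) -> None:
    """Bundle the book file and its metadata record into one archive."""
    with zipfile.ZipFile(out_path, "w") as package:
        package.write(pdf_path, arcname="monograph.pdf")
        package.writestr("metadata.json", json.dumps(metadata, indent=2))

if __name__ == "__main__":
    build_deposit_package(Path("monograph.pdf"), Path("deposit_package.zip"))
```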
However, scholars are increasingly experimenting with digital platforms, tools, and technologies to communicate their research. They create what, in preservation terms, can be considered complex monographs. These include, for example, multi-modal books that contain text, data visualisations, audio and video content, embedded third-party content (such as YouTube videos), and hyperlinks; books that invite non-linear types of reader engagement; or living books that are published in different versions and allow readers to annotate and comment. The volume and variety of formats, file types, and data involved pose a challenge for long-term preservation: complex monographs often eschew standardisation and replicability, and so demand a tailored approach that includes, for example, manual metadata input and the manual deposit of books and their supplementary parts. Such an approach creates problems of cost, time, responsibility, and thus also labour, especially for smaller publishers.
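By contrast, a complex monograph needs a much richer, largely hand-curated description. The manifest sketched below is purely hypothetical – the structure and field names are not taken from any real archive’s schema – but it shows the kinds of components, versions, and external dependencies that typically have to be described manually.

```python
# Hypothetical manifest for a "complex" monograph, for illustration only.
# It lists the kinds of components that usually need manual description:
# multimedia files, embedded third-party content, and published versions.
complex_monograph_manifest = {
    "title": "An Example Living, Multi-modal Book",   # invented example
    "versions": [
        {"version": "1.0", "date": "2022-05-01", "note": "initial publication"},
        {"version": "1.1", "date": "2023-02-14", "note": "reader annotations incorporated"},
    ],
    "components": [
        {"type": "text", "file": "chapters/chapter-01.html"},
        {"type": "data-visualisation", "file": "figures/interactive-map.js",
         "fallback": "figures/interactive-map.png"},  # static fallback kept for preservation
        {"type": "video", "file": "media/interview.mp4"},
    ],
    "external_dependencies": [
        # Embedded third-party content that an archive cannot capture directly.
        {"type": "embedded-video", "url": "https://www.youtube.com/watch?v=example",
         "capture_strategy": "link only; content owned by a third party"},
        {"type": "annotation-layer", "platform": "hypothes.is",
         "capture_strategy": "periodic export of public annotations"},
    ],
}
```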
Discuss archiving and preservation options early in the publication process
For this reason, it is important for scholars – together with the publishers of their books – to start discussing archiving and preservation options early in the publication process. These considerations may include which output file formats can be generated from the platforms used, and which of these outputs to keep and preserve during the publishing process and beyond (for example, as complementary materials forming part of a publication).
Also, it is important to add that even traditionally formatted and published open access monographs may be at risk, depending on whether the publisher has a pathway to archiving in multiple locations and/or digital preservation in a digital preservation archive (such as Portico or CLOCKSS). For example, many open access book publishers can access this pathway via OAPEN, which has a partnership with Portico to preserve all the books held in OAPEN. There is, however, a cost to this, so very small or academic-led presses may not have the funds required, as their endeavour is entirely voluntary. This is where OBF’s Archiving & Preservation work package is focusing its work.
How ‘experimental’ monograph publishing works
DT: Thank you. In your answer there, you point out that researchers are starting to embrace more experimental approaches to publishing monographs. I would like to know more about this – can you go into a bit more detail about how this works?
RK and MB: With the emergence of digital and open access publishing, scholars, publishers, infrastructure providers, developers, and librarians have increasingly experimented with alternative tools, practices, and formats for scholarly publishing.
These experiments include, among other things:
- Redistributing governance of the technologies that scholarly knowledge relies on, for example by using open source tools, software, and platforms that are available or in development (such as the publishing platforms Manifold, PubPub, and Scalar, or the collaborative annotation software hypothes.is).
- Challenging existing liberal humanist copyright regimes and conventional ways of publishing (humanities) research, including the notions of individual authorship, originality, and ownership of research that these copyright regimes rely on (for example, by fostering interactions around books through open peer review, social annotation, or collaborative writing and editing).
- Questioning notions of the book as fixed and static (for example, through processual – or living – and multi-modal publications).
These experiments should not be considered solely aesthetic or formal. Rather, they also entail, among other things, a critique of and intervention into the knowledge-based economy pushed forward by commercial publishing conglomerates, as well as by neoliberal universities, policy providers, and funding bodies; and into the individualistic, competitive, and object-based ways in which scholarly knowledge creation and sharing is currently pursued.
For everyone who is interested in experimenting with scholarly books: as part of the Experimental Publishing work package of the COPIM project, we developed an Experimental Publishing Compendium (currently available in a beta version) that offers a selection of tools, books, and practices for experimental publishing, while our Books Contain Multitudes report, available on COPIM’s website, expands further on the potential of experimental scholarly books.
The challenge of preserving complex and experimental monographs
DT: Earlier, you mentioned that experimental monographs are much more challenging to preserve than the simple PDFs that we are more used to. Can you elaborate on this, and perhaps talk about what considerations are being made by the publishers of these works?
RK and MB: It is true that complex and experimental monographs are more challenging to archive and preserve than their traditional counterparts. As mentioned, the basic package sent to digital preservation archives (for long-term preservation) or archived in institutional repositories is usually simply the PDF and the metadata. For works that conform to the format of a traditional book (static text, maybe some images), this works fine. But where there are multiple files, audio-visual content, or external dependencies, things get a lot more complicated. There are also questions around what preservation means for an iterative work, where changes are made over time: at what point is the work to be preserved? How many times should the changes be recorded in additional preserved versions? Which changes are significant, and how is this decided?
As more research outputs are completed collaboratively, in areas where the iterative nature of the research is part of the research methods or processes (and subsequently important to record and preserve), these questions will grow in importance. We have addressed some of this discussion in Preserving Combinatorial Books, a blog post published at the end of the COPIM Project.
A key factor in this discussion is efficient versioning criteria, which is also discussed in the Guidelines for Preserving New Forms of Scholarship published by the Mellon-funded Enhancing Services to Preserve New Forms of Scholarship project at NYU Libraries. This project, and its successor project, Embedding Preservability, examine many of the specific concerns about file types and formats, publishing platforms, embedded content, and external dependencies, which are significant concerns for experimental monographs. The guidelines are a useful source for publishers and those concerned with preservation alike.
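To make the versioning questions above a little more concrete, the sketch below shows one hypothetical way a publisher might log changes to a living monograph and flag which ones warrant depositing a new preserved version. The notion of a “significant” change used here is an invented editorial policy for the example, not a criterion taken from the Guidelines for Preserving New Forms of Scholarship.

```python
# Hypothetical versioning log, for illustration only: records changes to a
# living monograph and flags which ones trigger a new preserved version.
from dataclasses import dataclass
from datetime import date

@dataclass
class Change:
    when: date
    description: str
    significant: bool  # e.g. new chapter or revised argument vs. typo fixes

changes = [
    Change(date(2023, 1, 10), "Corrected typos in chapter 2", significant=False),
    Change(date(2023, 6, 2), "Added new chapter responding to open peer review", significant=True),
    Change(date(2023, 9, 15), "Updated embedded data visualisation", significant=True),
]

# Only significant changes lead to a new deposit in the preservation archive.
for change in (c for c in changes if c.significant):
    print(f"Deposit new preserved version: {change.when.isoformat()} – {change.description}")
```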
In terms of the publishers, for many, the concept of long-term preservation may actually be a fairly new one, and not a consideration that had previously been part of creating new published works. As the possibilities of digital innovation speed ahead with a multiplicity of new technologies, digital preservation struggles to keep up, mainly because processes need to be established for works to be preserved. Publishers of experimental digital monographs need to be aware that the more nonstandard and unusual their process, the more at risk the work may be when it comes to future preservation.
However, the way forward is being considered by multiple scholars of digital monograph preservation, within Open Book Futures and Embedding Preservability, and beyond. In the meantime, publishers can take time to consider what is important to preserve and what may not be, and begin to build this into their decision-making when creating new works.
More on archiving and preserving complex and experimental digital monographs can be found in Chapter 6 of COPIM Work Package 7’s publication, Good, Better, Best: Practices in Archiving and Preserving Open Access Monographs.
Read more in Part Three of this series, where we turn to digital editions and what makes them ‘preservable by design’, and look at what UCC Library is doing in the area of digital preservation of scholarly content…