It’s been a while since the last release of the Isolyzer tool: after four years of near-inactivity, I have just published Isolyzer 1.4. In this post I provide some background information on how this release came about, and briefly explain the main changes.
Intensive Breeding by Jean Marc Cote, Public domain, via Wikimedia Commons.
At the KB we’ve been using JP2 (JPEG 2000 Part 1) as our primary image format for digitised newspapers, books and periodicals since 2007. The digitisation work is contracted out to external vendors, who supply the digitised pages as losslessly compressed preservation masters, as well as lossily compressed access images that are used within the Delpher platform.
Right now the KB is in the process of migrating its digital collections to a new preservation system. This prompted the question of whether it would be feasible, at some point in the future, to generate access JP2s from the preservation masters in-house, using software that runs inside the preservation system[1]. As a first step towards answering that question, I created some simple proof-of-concept workflows, using three different JPEG 2000 codecs. I then tested these workflows with preservation master images from our collection. The main objective of this work was to find a workflow that both meets our current digitisation requirements and is sufficiently performant.
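To give an idea of what such a master-to-access conversion step might look like, here is a minimal sketch that drives OpenJPEG's command-line tools from Python. The choice of OpenJPEG, the intermediate TIFF step, and the compression settings (an 8:1 ratio with the irreversible 9-7 wavelet) are illustrative assumptions only, and not the KB's actual digitisation requirements or the codecs tested in the post.

```python
import subprocess
import tempfile
from pathlib import Path

def make_access_jp2(master: Path, access: Path) -> None:
    """Derive a lossy access JP2 from a lossless JP2 preservation master.

    Illustrative sketch only: uses OpenJPEG's opj_decompress/opj_compress,
    with an intermediate TIFF because opj_compress does not read JP2 input.
    The compression settings are placeholder values.
    """
    with tempfile.TemporaryDirectory() as tmp:
        tif = Path(tmp) / (master.stem + ".tif")
        # 1. Decompress the lossless master to an intermediate TIFF
        subprocess.run(["opj_decompress", "-i", str(master), "-o", str(tif)],
                       check=True)
        # 2. Recompress to a lossy access JP2: 8:1 ratio, irreversible 9-7
        #    wavelet (-I), 6 resolution levels, RPCL progression order
        subprocess.run(["opj_compress", "-i", str(tif), "-o", str(access),
                        "-r", "8", "-I", "-n", "6", "-p", "RPCL"],
                       check=True)

if __name__ == "__main__":
    make_access_jp2(Path("master.jp2"), Path("access.jp2"))
```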
Earlier this month saw the publication of The Significant Properties of Spreadsheets. This is the final report of a six-year research effort by the Open Preservation Foundation’s Archives Interest Group (AIG), which is composed of participants from the National Archives of the Netherlands (NANETH), the National Archives of Estonia (NAE), the Danish National Archives (DNA), and Preservica. The report caught my attention for two reasons. First, there’s the subject matter of spreadsheets, on which I’ve written a few posts in the past[1]. Second, it marks a surprising (at least to me!) return of “significant properties”, a concept that was omnipresent in the digital preservation world between, roughly, 2005 and 2010, but which has largely fallen into disuse since then. In this post I’m sharing some of my thoughts on the report.
Over the years, I’ve been using a variety of open-source software tools for solving all sorts of issues with PDF documents. This post is an attempt to (finally) bring together, in a single place, my go-to PDF analysis and processing tools and commands for a variety of common tasks. It is largely based on a multitude of scattered lists, cheat-sheets and working notes that I made earlier. Starting with a brief overview of some general-purpose PDF toolkits, I then move on to a discussion of the following specific tasks (a small example sketch follows the list):
Validation and integrity testing
PDF/A and PDF/UA compliance testing
Document information and metadata extraction
Policy/profile compliance testing
Text extraction
Link extraction
Image extraction
Conversion to other (graphics) formats
Inspection of embedded image information
Conversion of multiple images to PDF
Cross-comparison of two PDFs
Corrupted PDF repair
File size reduction of PDF with hi-res graphics
Inspection of low-level PDF structure
View, search and extract low-level PDF objects
Incremental updates and document versions: get information about the number of incremental updates, and restore previous versions
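To give a flavour of what some of these tasks look like in practice, here is a minimal sketch that covers two of them (document information extraction and text extraction) using the pypdf library. This is just one possible tool choice for illustration, not necessarily the one discussed in the post, and the file name is a placeholder.

```python
from pypdf import PdfReader

# Illustrative example only; "example.pdf" is a placeholder file name
reader = PdfReader("example.pdf")

# Document information and metadata extraction
info = reader.metadata
print("Title:   ", info.title if info else None)
print("Author:  ", info.author if info else None)
print("Producer:", info.producer if info else None)
print("Pages:   ", len(reader.pages))

# Text extraction, page by page
for i, page in enumerate(reader.pages):
    text = page.extract_text() or ""
    print(f"--- page {i + 1} ---")
    print(text[:200])  # first 200 characters of each page
```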
My previous post addressed the emulation of mobile Android apps. In this follow-up, I’ll explore some other aspects of mobile app preservation, with a focus on acquisition and ingest processes. The departure point was again the 2019 iPres paper on the Acquisition and Preservation of Mobile eBook Apps by Maureen Pennock, Peter May and Michael Day. In its concluding section, the authors recommend:
In terms of target formats for acquisition, we reach the undeniable conclusion that acquisition of the app in its packaged form (either an IPA file or an APK file) is optimal for ensuring organisations at least acquire a complete published object for preservation.
And:
[T]his form should at least also include sufficient metadata about inherent technical dependencies to understand what is needed to meet them.
In practical terms, this means that the workflows that are used for acquisition and (pre-)ingest must include components that are able to deal with the following aspects:
Acquisition of the app packages (either by direct deposit from the publisher, or using the app store).
Identification of the package format (APK for Android, IPA for iOS).
Identification of metadata about the app’s technical dependencies.
The main objective of this post is to get an idea of what would be needed to implement these components. Is it possible to do all of this with existing tools? If not, what are the gaps? The underlying assumption here is an emulation-based preservation strategy[1].
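As a rough illustration of the package-identification step, here is a minimal sketch that distinguishes APK from IPA packages by looking at their internal structure (both are ZIP-based archives). The heuristic and the file names are my own illustrative assumptions: in a real workflow, format identification would more likely rely on a signature-based tool, and extracting dependency metadata (such as the minimum Android SDK version) would need something like the aapt utility or a binary-XML parser, since AndroidManifest.xml inside an APK is not plain text.

```python
import zipfile
from pathlib import Path

def identify_app_package(path: Path) -> str:
    """Crude heuristic to tell APK from IPA packages (both are ZIP archives).

    Illustrative only: a production workflow would use a signature-based
    identification tool rather than this kind of ad-hoc check.
    """
    if not path.is_file():
        return "file not found"
    if not zipfile.is_zipfile(path):
        return "not a ZIP-based package"
    with zipfile.ZipFile(path) as zf:
        names = zf.namelist()
        # Android packages carry a (binary) manifest and compiled Dalvik code
        if "AndroidManifest.xml" in names and "classes.dex" in names:
            return "APK (Android app package)"
        # iOS packages wrap an .app bundle inside a Payload/ directory
        if any(n.startswith("Payload/") and ".app/" in n for n in names):
            return "IPA (iOS app package)"
        return "ZIP archive, but not a recognised app package"

if __name__ == "__main__":
    # Placeholder file names for illustration
    for f in ["myapp.apk", "myapp.ipa"]:
        print(f, "->", identify_app_package(Path(f)))
```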