Y2K

30 December 2024
Digital Dark Age Crew logo.

One of the most elusive items in the Digital Dark Age Crew back catalogue is “Y2K”, which deals with the Year 2000 problem. Originally planned as a December 1999 release, the track was never finished due to a succession of technical problems. Some early demos of “Y2K” have surfaced as bootlegs, and many fans of the group rate these amongst the most sought-after Digital Dark Age Crew tracks.


PDF Quality assessment for digitisation batches with Python, PyMuPDF and Pillow

13 December 2024
Control room of a fossil fuel power plant in Point Tupper, Nova Scotia. Achim Hering, Public domain, via Wikimedia Commons.

This post introduces Pdfquad, a software tool for automated quality assessment of large digitisation batches. The software was developed specifically for the Digital Library for Dutch Literature (DBNL), but it may be adaptable to the needs of other users and organisations as well.
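To give a rough flavour of what rule-based quality assessment involves, here is a minimal, hypothetical sketch. The property names, profile values and function names below are my own invention for illustration only, and do not reflect Pdfquad's actual schema or API:

```python
# Hypothetical sketch of rule-based quality assessment for one extracted
# page image; property names and thresholds are invented for illustration.

PROFILE = {"format": "JPEG", "min_ppi": 300, "min_bit_depth": 8}

def assess_image(props, profile=PROFILE):
    """Check one image's properties against a batch profile.

    Returns a list of (check_name, passed) tuples.
    """
    return [
        ("format", props.get("format") == profile["format"]),
        ("resolution", props.get("ppi", 0) >= profile["min_ppi"]),
        ("bit_depth", props.get("bit_depth", 0) >= profile["min_bit_depth"]),
    ]

def batch_passes(images, profile=PROFILE):
    """A batch passes only if every image passes every check."""
    return all(ok for props in images
                  for _, ok in assess_image(props, profile))
```

In a real tool the `props` dictionaries would be filled by extracting the page images with a library such as PyMuPDF and reading their properties with Pillow; the sketch only shows the rule-checking stage.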


Escape from the phantom of the PDF

14 November 2024
A man in armour is confronted by a ghost and a skeleton. Aquatint. Wellcome Collection, Public Domain.

In a recent blog post, colleagues at the National Digital Preservation Services in Finland addressed an issue with PDF files that contain strings with octal escape sequences. These are not parsed correctly by JHOVE, and the resulting parse errors ultimately lead to (seemingly unrelated) validation errors. The authors argue that octal escape sequences present a preservation risk, as they may confuse other software besides JHOVE. Since this claim is not backed up by any evidence, here I put it to the test using eight different PDF processing tools and libraries.
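For context: a PDF literal string may contain `\ddd` escape sequences of one to three octal digits, each encoding a single byte, so `(\110\151)` denotes the same string as `(Hi)`. A minimal decoder for just the octal case might look like the sketch below (deliberately not a full PDF string parser):

```python
import re

def decode_octal_escapes(literal):
    """Decode \\ddd octal escapes in a PDF literal string.

    Sketch only: real PDF literal strings also allow escapes such as
    \\n, \\t, \\( and \\), which this deliberately ignores.
    """
    return re.sub(r"\\([0-7]{1,3})",
                  lambda m: chr(int(m.group(1), 8)),
                  literal)
```

A tool that implements this rule decodes `(\110\151)` to "Hi"; a tool that does not may mangle the string, which is exactly the kind of behaviour the tests in the post probe for.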


JPEG quality estimation using simple least squares matching of quantization tables

30 October 2024
Adapted from Quality Coal by Greenville Daily Photo. Used under CC0 1.0. license.

My previous post addressed several problems I ran into when I tried to estimate the “last saved” quality level of JPEG images. It described some experiments based on ImageMagick’s quality heuristic, which led to a Python implementation of a modified version of that heuristic that improves its behaviour for images with a quality of 50% or less.

I still wasn’t entirely happy with this solution. This was partly because ImageMagick’s heuristic uses aggregated coefficients of the image’s quantization tables, which makes it potentially vulnerable to collisions. Another concern was that the reasoning behind certain details of ImageMagick’s heuristic seems rather opaque (at least to me!).

In this post I explore a different approach to JPEG quality estimation, which is based on a straightforward comparison with “standard” JPEG quantization tables using least squares matching. I also propose a measure that characterizes how similar an image’s quantization tables are to its closest “standard” tables. This could be useful as a measure of confidence in the quality estimate. I present some tests where I compare the results of the least squares matching method with those of the ImageMagick heuristics. I also discuss the results of a simple sensitivity analysis.
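As a hedged sketch of the general idea (the post's actual method may differ in detail, and only the luminance table is considered here): if the encoder used libjpeg-style tables, then the "standard" table for quality Q is the Annex K base table scaled by libjpeg's quality-scaling formula, so least squares matching reduces to scanning Q from 1 to 100 and keeping the best fit:

```python
# Standard luminance quantization table from Annex K of the JPEG standard.
BASE_LUMA = [
    16, 11, 10, 16, 24, 40, 51, 61,
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68, 109, 103, 77,
    24, 35, 55, 64, 81, 104, 113, 92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103, 99,
]

def scale_table(base, quality):
    """Scale a base table the way libjpeg does for a quality of 1-100."""
    s = 5000 // quality if quality < 50 else 200 - 2 * quality
    return [min(max((b * s + 50) // 100, 1), 255) for b in base]

def estimate_quality(qtable):
    """Return (quality, rmse): the standard quality whose scaled table is
    closest to qtable in the least squares sense, plus the root mean
    squared error of that match as a similarity/confidence measure."""
    best_q, best_err = 1, float("inf")
    for q in range(1, 101):
        err = sum((a - b) ** 2
                  for a, b in zip(qtable, scale_table(BASE_LUMA, q)))
        if err < best_err:
            best_q, best_err = q, err
    return best_q, (best_err / len(qtable)) ** 0.5
```

An image written with standard tables at, say, quality 75 matches exactly (RMSE of 0), whereas a nonzero RMSE signals non-standard tables, and hence a less trustworthy quality estimate.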


JPEG quality estimation: experiments with a modified ImageMagick heuristic

23 October 2024
Bailey AKA the "I have no idea what I'm doing" dog. License unknown.

In this post I explore some of the challenges I ran into while trying to estimate the quality level of JPEG images. By quality level I mean the percentage (1-100) that expresses the lossiness that was applied by the encoder at the last “save” operation. Here, a value of 1 results in very aggressive compression with a lot of information loss (and thus a very low quality), whereas at 100 almost no information loss occurs at all.
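In libjpeg-style encoders (which Pillow and many other tools wrap), this percentage does not act on the pixels directly: it scales the quantization tables. A minimal sketch of that mapping, following libjpeg's `jpeg_quality_scaling` together with the baseline clamp of each coefficient to the range 1-255:

```python
def quality_to_scale(quality):
    """libjpeg's mapping from quality (1-100) to a scaling percentage."""
    quality = max(1, min(100, quality))
    return 5000 // quality if quality < 50 else 200 - 2 * quality

def scale_coefficient(base, quality):
    """Scale one base quantization coefficient, clamped to 1-255."""
    return min(max((base * quality_to_scale(quality) + 50) // 100, 1), 255)
```

Quality 50 leaves the base tables unchanged (scale 100), quality 100 forces every coefficient down to 1 (hence almost no loss), and at very low qualities the scale is so large that many coefficients hit the 255 clamp, which discards information an estimator could otherwise use.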

More specifically, I focus on problems with ImageMagick’s JPEG quality heuristic, which become particularly apparent when it is applied to low quality images. I also propose a simple, tentative solution that applies some small changes to ImageMagick’s heuristic.


