Digitising ‘Common Sense’ (pt.II)

Since writing about my intention to digitise the journal, Common Sense, I’ve received support from former editors, Richard Gunn, Werner Bonefeld, Adrian Wilding and Brian McGrail, who between them have sent me the entire run of 24 issues. Using our library’s book scanning facilities, I’ve managed to scan all issues of the journal much more quickly than I originally anticipated.

24 issues with around 2100 pages, 200 articles and 104 authors, over 12 years.

The format of the journal changed twice during the course of its life. Issues 1-9 were photocopies of original typed articles that contributors would send to the editors. The first three issues were stapled along the edge of A4 sheets and proved difficult at times to scan because this method of binding did not leave very much margin when pressing the page flat against the scanner bed. Issues 4-9 were easier because they were stapled in the middle of an A3 sheet and would open nicely for lying flat on the scanner. Issues 7-9 were especially easy because contributors seemed to consistently take notice of the editors’ request to leave wide margins.

Notes for contributors: send articles in clean and reproducible typescript, single-space or space-and-a-half (not double-space). Leave wide margins on both sides, and wide gaps at top and bottom of each page.

Issues 10-24 were published in a more conventional journal format, which left enough room at the margins to achieve a consistently good scan; a single issue could be scanned in about 30 minutes, half the time that issues 1-3 took.

The journal was scanned at 300dpi using a Plustek OpticBook 3600 scanner to create bitmap files of each page. I then used Adobe Acrobat 7 to OCR the pages and create PDFs. This produced pages that are print quality, should you wish to print them out, as well as being fully searchable. Using Acrobat, I cropped pages from the earlier issues with problematic margins to leave a relatively clean page, although at times you’ll see that there’s barely any margin at all. Without taking the original issues apart, I don’t think I could have done much better.

I’ve also created a website for the journal, hosted here on the University of Lincoln’s blogging platform, with a mapped domain of http://commonsensejournal.org.uk that costs £5/year. I’ve tried to make the journal easy to navigate and you can browse by issue, author and date of publication. You can also search the table of contents across the entire run of 24 issues. I’ve been playing with Google Custom Search, which should provide a way to search the full text of the journal from the website. However, this largely depends on when Google decides to index the PDFs[1], so I won’t implement it until I know the full text of all issues is indexed.

The original paper copies of the journal will be deposited with either the National Library of Scotland or the British Library, depending on what they currently hold.

Finally, Mike Neary (who introduced me to the journal) and I intend to write an article that retrospectively discusses the journal and hopefully provides a useful, critical introduction for new readers. Past editors and contributors have offered to help.

  1. Google allow you to force-index URLs, but this is no guarantee that it’ll happen quickly or consistently.

4 thoughts on “Digitising ‘Common Sense’ (pt.II)”

  1. Absolutely awesome Joss. Thanks for all your hard work and for ruining the rest of my day productivity-wise.

    • Yes, tedious, but worthwhile and took no more than three days of my life to complete. Over time, I think a lot of people will benefit from it. I’d never held even a single issue of the journal before this project, so it was a way to become acquainted with older work by writers I admire.
