
Archive for April, 2011

This post is a thank-you to my followers on Twitter for pointing me towards many of the examples given below.  The thoughts on automated description and transcription are a preliminary sketching-out of ideas (which, I suppose, is a way of excusing myself if I am not coherent!), on which I would particularly welcome comments or further suggestions:

A week or so before Easter, I was reading a paper about the classification of galaxies on the astronomical crowdsourcing website, Galaxy Zoo.  The authors use a statistical (Bayesian) analysis to distil an accurate sample of data, and then compare the reliability of this crowdsourced sample to classifications produced by expert astronomers.  The article also refers to the use of sample data in training artificial neural networks in order to automate the galaxy classification process.
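To make the idea a little more concrete, here is a toy sketch (in Python) of the kind of reliability-weighted aggregation such studies rely on: each volunteer's classifications are weighted by how well they perform on a small expert-labelled subset, and the weighted votes are then tallied per object.  The data, the weighting scheme and the fallback value are all invented for illustration; this is emphatically not the paper's actual Bayesian method, just the general shape of the approach.

```python
# Toy sketch of reliability-weighted vote aggregation for crowdsourced
# classifications, in the spirit of the Galaxy Zoo paper discussed above.
# All data and the weighting scheme below are illustrative assumptions,
# not the authors' actual (Bayesian) analysis.
from collections import defaultdict

# volunteer_id -> {object_id: label}
votes = {
    "vol_a": {"gal_1": "spiral", "gal_2": "elliptical", "gal_3": "spiral"},
    "vol_b": {"gal_1": "spiral", "gal_2": "spiral", "gal_3": "spiral"},
    "vol_c": {"gal_1": "elliptical", "gal_2": "elliptical", "gal_3": "spiral"},
}

# A small expert-labelled subset used to estimate each volunteer's reliability.
gold = {"gal_1": "spiral", "gal_2": "elliptical"}

def reliability(volunteer_votes):
    """Fraction of the expert-labelled subset this volunteer got right."""
    scored = [obj for obj in volunteer_votes if obj in gold]
    if not scored:
        return 0.5  # no evidence yet: assume chance-level reliability
    correct = sum(volunteer_votes[obj] == gold[obj] for obj in scored)
    return correct / len(scored)

weights = {vol: reliability(v) for vol, v in votes.items()}

# Weight each volunteer's vote by their estimated reliability and tally.
tallies = defaultdict(lambda: defaultdict(float))
for vol, v in votes.items():
    for obj, label in v.items():
        tallies[obj][label] += weights[vol]

for obj, labels in sorted(tallies.items()):
    label, score = max(labels.items(), key=lambda kv: kv[1])
    total = sum(labels.values())
    print(f"{obj}: {label} (confidence {score / total:.2f})")
```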

This set me thinking about archivists’ approaches to online user participation and the harnessing of computing power to solve problems in archival description.  On the whole, I would say that archivists (and our partners on ‘digital archives’ kinds of projects) have been rather hamstrung by a restrictive, qualitatively-evaluated, ‘human-scale’ vision of what might be achievable through the application of computing technology to such issues.

True, the notion of an Archival Commons evokes a network-oriented archival environment.  But although the proponents of this concept recognise “that the volume of records simply does not allow for extensive contextualization by archivists to the extent that has been practiced in the past”, the types of ‘functionalities’ envisaged to comprise this interactive descriptive framework still mirror conventional techniques of description, in that they rely upon the human ability to interpret context and content in order to make contributions imbued with “cultural meaning”.  There are occasional hints of the potential for more extensible (web-scale?) methods of description, in the contexts of tagging and of information visualization, but these seem to be conceived more as opportunities for “mining the communal provenance” of aggregated metadata – creating additional folksonomic structures alongside traditional finding aids.  This is not to say that the Archival Commons is not still justified from a cultural or societal perspective, but the “volume of records” cataloguing backlog will require a solution which moves beyond merely enlarging the pool of participants able to contribute narrative descriptive content and establish contextual linkages.

Meanwhile, double-keying, checking and data standardisation procedures in family history indexing have come a long way since the debacle over the 1901 census transcription. But double-keying for a commercial partner also signals a doubling of transcription costs, possibly without a corresponding increase in transcription accuracy.  Or, as the Galaxy Zoo article puts it, “the overall agreement between users does not necessarily mean improvement as people can agree on a wrong classification”.  Nevertheless, these norms from the commercial world have somehow transferred themselves as the ‘gold standard’ into archival crowdsourcing transcription projects, in spite of the proofreading overhead (bounded by the capacity of the individual, again).  As far as I am aware, Old Weather (which is, of course, a Zooniverse cousin of Galaxy Zoo) is the only project working with archival content which has implemented a quantitative approach to assess transcription accuracy – improving the project’s completion rate in the process, since the decision could be taken to reduce the number of independent transcriptions required from five to three.
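As a back-of-the-envelope illustration (my own, not Old Weather’s actual analysis), consider what a simple majority vote buys you if each independent transcriber reads a given entry correctly with some probability p.  The figures below are assumed purely to show why dropping from five transcriptions to three can cost very little once individual transcribers are reasonably accurate – and the calculation also makes plain the caveat in the quotation above, since it only holds if transcribers do not all share the same misreading.

```python
# Rough illustration (assumed figures, not Old Weather's actual numbers):
# if each independent transcriber is correct with probability p, how likely
# is a simple majority of k transcribers to settle on the right reading?
# This assumes errors are independent - it breaks down when "people agree
# on a wrong classification", i.e. share the same systematic misreading.
from math import comb

def majority_correct(p, k):
    """Probability that more than half of k independent transcribers are right."""
    need = k // 2 + 1
    return sum(comb(k, n) * p**n * (1 - p)**(k - n) for n in range(need, k + 1))

for p in (0.80, 0.90, 0.95):
    print(f"individual accuracy {p:.2f}: "
          f"3 transcribers -> {majority_correct(p, 3):.3f}, "
          f"5 transcribers -> {majority_correct(p, 5):.3f}")
```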

Pondering these and other such tangles, I began to wonder whether there have indeed been any genuine attempts to harness large-scale processing power for archival description or transcription.  If tools designed to decipher modern handwriting are now available commercially (two examples: MyScript for LiveScribe; Evernote’s text recognition tool), why not an automated palaeographical tool?  Vaguely remembering that The National Archives had once been experimenting with text mining for both cataloguing and sensitivity classification [I do not know what happened to this project – can anyone shed some light on it?], and recollecting the determination of one customer at West Yorkshire Archive Service who valiantly tried (and failed) to teach his Optical Character Recognition (OCR) software to recognise nearly four centuries of clerks’ handwriting in the West Riding Registry of Deeds indexes, I put out a tentative plea on Twitter for further examples of archival automation.  The following examples are the pick of the amazing set of responses I received:

  • The Muninn Project aims to extract and classify written data about the First World War from digitized documents using raw computing power alone.  The project appears to be at an early stage, and is beginning with structured documents (those written onto pre-printed forms) but hopes to move into more challenging territory with semi-structured formats at a later stage.
  • The Dutch Monk Project (not to be confused with the American project of the same name, which facilitates text mining in full-text digital library collections!) seeks to make use of the qualitative interventions of participants playing an online transcription correction game in order to train OCR software for improved handwriting recognition rates in future.  The project tries to stimulate user participation through competition and rewards, following the example of Google Image Labeller.  If your Dutch is good, Christian van der Ven’s blog has an interesting critique of this project (Google’s attempt at translation into English is a bit iffy, but you can still get the gist).
  • Impact is a European-funded project which takes a similar approach to the Monk project, but has focused on improving automated text recognition for early printed books.  The project has produced numerous tools to improve both OCR image recognition and lexical information retrieval, and a web-based collaborative correction platform for accuracy verification by volunteers.  The input from these volunteers can then, in turn, be used to further refine the automated character recognition (see the videos on the project’s YouTube channel for some useful introductory materials).  Presumably these techniques could be further adapted to help with handwriting recognition, perhaps beginning with the more stylised court hands, such as Chancery hand.  The division of the quality control checks into separate character-, word-, and page-level tasks (as illustrated in this video) is especially interesting, although I think I’d want to take this further and partition the labour on each of the different tasks as well, rather than expecting one individual to work sequentially through each step.  Thinking of myself as a potential volunteer checker, I suspect I’d get bored and give up at the letter-checking stage.  Perhaps this more mundane task would be more effectively offered in return for a peppercorn payment as a ‘human intelligence task’ on a platform such as Amazon Mechanical Turk, whilst volunteer time could be better spent on the more interesting word- and page-level checking (a rough sketch of this kind of task partitioning follows this list).
  • Genealogists are always ahead of the game!  The Family History Technology Workshop held annually at Brigham Young University usually includes at least one session on handwriting recognition and/or data extraction from digitized documents.  I’ve yet to explore these papers in detail, but there looks to be masses to read up on here.
  • Wot no catalogue? Google-style text search within historic manuscripts: the Center for Intelligent Information Retrieval (University of Massachusetts Amherst) has handwriting retrieval demonstration systems offering manuscript document retrieval on the fly.
  • Several other tools and projects which might be of interest are listed in this handy Google doc on Transcribing Handwritten Documents put together by attendees at the DHapi workshop held at the Maryland Institute for Technology in the Humanities earlier this year.  Where I’ve not mentioned specific examples directly here, it’s mostly because they are online user transcription interfaces (which, for the purposes of this post, I’m classing as technology-enhanced projects, as opposed to the technology-driven approaches which are my main focus here – if that makes sense? Monk and Impact creep in above because they combine both approaches).
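To flesh out the task-partitioning idea mentioned under Impact above, here is a rough, entirely hypothetical sketch of how correction work might be split by granularity and routed to different pools of checkers.  The task types, queues and routing rule are my own assumptions and bear no relation to Impact’s actual platform.

```python
# Hypothetical sketch of partitioning OCR-correction work by granularity:
# route the repetitive character-level checks to a paid micro-task queue
# (Mechanical Turk-style), keeping word- and page-level checks for volunteers.
# Task types, queues and the routing rule are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CorrectionTask:
    task_id: str
    granularity: str   # "character", "word" or "page"
    image_region: str  # reference to the digitised snippet being checked
    ocr_guess: str     # what the OCR engine thinks it says

@dataclass
class Queue:
    name: str
    tasks: List[CorrectionTask] = field(default_factory=list)

def route(task: CorrectionTask, paid: Queue, volunteer: Queue) -> None:
    """Send mundane character checks to paid micro-tasking, the rest to volunteers."""
    (paid if task.granularity == "character" else volunteer).tasks.append(task)

paid_queue = Queue("paid human intelligence tasks")
volunteer_queue = Queue("volunteer correction platform")

for task in [
    CorrectionTask("t1", "character", "page12/region3", "e"),
    CorrectionTask("t2", "word", "page12/region3", "chancery"),
    CorrectionTask("t3", "page", "page12", "full page text here"),
]:
    route(task, paid_queue, volunteer_queue)

for q in (paid_queue, volunteer_queue):
    print(q.name, "->", [t.task_id for t in q.tasks])
```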

If you know of other examples, please leave a comment…


Digital Connections: new methodologies for British history, 1500-1900

I spent an enjoyable afternoon yesterday (a distinct contrast, I might add, to the rest of my day, but that is another story) at the Digital Connections workshop at the Institute of Historical Research in London, which introduced two new resources for historical research: the federated search facility, Connected Histories, and the Mapping Crime project to link crime-related documents in the John Johnson collection of ephemera at the Bodleian Library in Oxford to related external resources.

After a welcome from Jane Winters, Tim Hitchcock kicked off proceedings with an enthusiastic endorsement of Connected Histories, and generally of all things digital and history-related, in his talk Towards a history lab for the digital past. I guess I fundamentally disagree with the suggestion that concepts of intellectual property might survive unchallenged in some quarters (in fact I think the idea is contradicted by Tim’s comments on the Enlightenment inheritance and the ‘authorship’ silo). But then again, we won’t challenge the paywall by shunning it altogether, and in that sense, Connected Histories’ ‘bridges’ to the commercial digitisation providers are an important step forward.  It will be interesting to see how business models evolve in response – there were indications yesterday that some providers may be considering moves towards offering short-term access passes, like the British Newspapers 1800-1900 at the British Library, where you can purchase a 24-hour or 7-day pass if you do not have an institutional affiliation.  Given the number of North American accents in evidence yesterday afternoon, too, there will be some pressure on online publishers to open up access to their resources to overseas users and beyond UK Higher Education institutions.

For me, the most exciting parts of the talk, and of the ensuing demonstration-workshop led by Bob Shoemaker, related to the Connected Histories API (which seems to be a little bit of a work in progress), which led to an interesting discussion about the technical skills required for contemporary historical research; and the eponymous ‘Connections‘, a facility for saving, annotating and (if desired) publicly sharing Connected Histories search results. The reception in the room was overwhelmingly positive – I’ll be fascinated to see if Connected Histories can succeed where other tools have failed in getting academic historians to become more sociable about their research and expertise.  Connected Histories is not, in fact, truly a federated search platform, in that the indexes for each participating resource have been re-created by the Connected Histories team and link back to the original source.  With the API, this will really open up access to many resources which were designed for human interrogation only, and I am particularly pleased that several commercial providers have been persuaded to sign up to this model.  It does, though, seem to add to the complexity of keeping Connected Histories itself up to date: there are plans to crawl contributing websites every six months to detect any changes required.  This seems to me quite labour-intensive, and I wonder how sustainable it will prove to be, particularly as the project team plan to add yet more resources to the site in the coming months and welcome enquiries from potential content providers (with an interesting charging model to cover the costs of including new material).  This September’s updates are planned to include DocumentsOnline from The National Archives, and there were calls from the audience yesterday to include catalogue data from local archives and museums.
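For what it’s worth, here is a minimal sketch of the sort of change-detection that a six-monthly re-crawl implies: fingerprint each contributing page and flag anything whose fingerprint differs from the previous crawl, so that only those records need re-indexing.  The URLs and content are invented, and this is an assumption about the general approach rather than a description of how Connected Histories actually does it.

```python
# Minimal change-detection sketch for a periodic re-crawl: hash each page's
# content and compare against the fingerprint recorded last time, flagging
# changed pages for re-indexing. URLs and content are hypothetical.
import hashlib

def fingerprint(page_content: str) -> str:
    """Stable digest of a page's content."""
    return hashlib.sha256(page_content.encode("utf-8")).hexdigest()

# Fingerprints recorded at the previous crawl (hypothetical).
previous = {
    "https://example.org/source/record-1": fingerprint("Baptism, 1723, St Mary's"),
    "https://example.org/source/record-2": fingerprint("Deed of sale, 1801"),
}

# Content as fetched during the current crawl (record-2 has been revised).
current_content = {
    "https://example.org/source/record-1": "Baptism, 1723, St Mary's",
    "https://example.org/source/record-2": "Deed of sale, 1801 (amended)",
}

needs_reindexing = [
    url for url, content in current_content.items()
    if fingerprint(content) != previous.get(url)
]
print("Pages to re-index:", needs_reindexing)
```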

Without wishing to come across as dismissive as this possibly sounds, David Tomkins’ talk about the Mapping Crime project was a pretty good illustration of what can be done when you have a generous JISC grant and a very small collection.  Coming from (in my working background, at least) a world of extremely large, poorly documented collections, where no JISC-equivalent funder is available, I was more interested in the generic tools provided for users of the John Johnson collection: permanent URIs for each item, citation download facilities, a personal, hosted user space within the resource, and even a scalable measuring tool for digitised documents.  I wonder why it is taking archival management software developers so long to get round to providing these kinds of tools for users of online archive catalogues? There was also a fascinating exposé of broadsheet plagiarism revealed by the digitisation and linking of two sensationalist crime reports which were identical in all details, apart from the dates of publication and the names of those involved.  A wonderful case study in archival authenticity.

David Thomas’ keynote address was an entertaining journey through 13 years of online digitisation effort, via the rather more serious issues of sustainability and the democratization of our digital heritage.  His conclusion – that the future of history is about machine-to-machine communication, and especially GIS and spatial data – might have come as a surprise to the customary occupants of the IHR’s Common Room, but it came with a warning about the problems the digital revolution poses for ordinary citizens and users: the ‘google issue’ of search results presented out of context; the maze of often complex and difficult-to-interpret online resources; and the question of whether researchers have the technical skills to fully exploit this data in new ways.
