Posts Tagged ‘RunCoCo’

  • Digital Impacts: How to Measure and Understand the Usage and Impact of Digital Content, Oxford Internet Institute/JISC, Oxford, 20th May 2011 (#oiiimpacts)
  • Beyond Collections: Crowdsourcing for public engagement, RunCoCo Conference, Oxford, 26th May 2011 (#beyond2011)
  • Professor Sherry Turkle, Alone Together RSA Lecture, RSA, London, 1st June 2011 (#rsaonline)

I’m getting a bit behind with blog postings (again), so here, in the interests of ticking another thing off my to-do list, are a few highlights from various events I’ve attended recently…

It was good to see a couple of fellow archivists at the showcase conference for JISC’s Impact and Embedding of Digitised Resources programme. As searchroom visitor figures continue to fall, it is more important than ever that archivists understand how to measure and demonstrate the usage and impact of their online resources. The number of unique visitors to the archive service’s website (currently the only metric available in the CIPFA questionnaire for Archive Services, for instance) is no longer (if it ever was) adequate as a measure of online usage. As Dr Eric Meyer pointed out in his introduction, one of the central lessons arising from the development of the Toolkit for the Impact of Digitised Scholarly Resources has been that no single metric will ever tell the whole story – a range of qualitative and quantitative methods is necessary to provide a full picture.

The word ‘scholarly’ in the toolkit’s name may be rather off-putting to some archivists working in local government repositories. That would be a shame, because this free online resource is full of very practical and useful advice and guidance. Like the historians caricatured by Sharon Howard of the Old Bailey Online project, archivists are not good at “studying people who can answer back” – the professional archival literature is full of laments about how poor we are at user studies. The synthesis report from the Impact programme, Splashes and Ripples: Synthesizing the Evidence on the Impacts of Digital Resources, is recommended reading; detailed evaluation reports from each of the projects which took part in the programme are also available (at http://www.jisc.ac.uk/whatwedo/programmes/digitisation/impactembedding.aspx).

Many of the recommendations made by the report would be relatively straightforward to implement, yet could potentially transform archive services’ online presence – and the TIDSR toolkit contains the resources to help evaluate the change. Simple suggestions include picking non-word acronyms to improve project visibility online (like TIDSR – at last I understand the Internet’s curious aversion to vowels: flickr, lanyrd, tumblr and so on!) and providing simple, automatic citations that are easy to copy or download (although I rather fear that archives are missing the boat on this one). Jane Winters was also excellent on the subject of sustaining digital impact, an important subject for archives, whose online resources are perhaps more likely than most to have a long shelf-life. Twitter coverage of the event is available on Summarizr (another one!).
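To make that concrete: here is a minimal sketch of pulling several complementary quantitative measures from an ordinary web server access log, rather than stopping at unique visitors. It assumes the standard Apache/nginx ‘combined’ log format; the file path is hypothetical, and a real evaluation would of course add qualitative methods alongside.

```python
# A minimal sketch: deriving several usage measures from a web server
# access log, rather than relying on unique visitors alone.
# Assumes Apache/nginx 'combined' log format; the path is hypothetical.
import re
from collections import Counter

LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def usage_metrics(log_path):
    visitors, paths, referrers = set(), Counter(), Counter()
    with open(log_path) as log:
        for line in log:
            m = LOG_PATTERN.match(line)
            if not m or not m.group('status').startswith('2'):
                continue  # skip malformed lines and unsuccessful requests
            visitors.add(m.group('ip'))
            paths[m.group('path')] += 1
            ref = m.group('referrer')
            if ref not in ('-', ''):
                referrers[ref] += 1
    return {
        'unique_visitors': len(visitors),       # the single CIPFA-style figure
        'total_requests': sum(paths.values()),
        'most_requested': paths.most_common(5),  # which resources get used
        'top_referrers': referrers.most_common(5),  # where users come from
    }

if __name__ == '__main__':
    print(usage_metrics('access.log'))  # hypothetical log file
```

Even this crude combination – who came, what they looked at, and where they came from – tells a richer story than a single visitor count.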

One gap in the existing digital measurement landscape which occurred to me during the Impacts event was the need for metrics which take account not just of the passive audience of digital resources, but also of those who contribute to them and participate in a more active way. The problem is easily illustrated by the difficulties encountered when using standard quantitative measurement tools with Web 2.0-style sites. Attempting to collate statistics on sites such as Your Archives or Transcribe Bentham through the likes of Google Scholar or Yahoo’s Site Explorer is handicapped by the very flexibility of a wiki site structure, compounded, I suspect, by the want of a uniquely traceable identity. Google Scholar, in particular, seems averse to searches on URLs (although, curiously, I discovered that while a search for yourarchives.nationalarchives.gov.uk produces 0 hits, yourarchives.nationalarchives.gov.* comes back with 26), whilst sites which invite user contributions are perhaps particularly susceptible to false-positive inlink hits where they are highlighted as a general resource in blogrolls and the like.
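As a sketch of what a contributor-centred metric might look like, the following counts distinct editors of a wiki, rather than visitors to it. It assumes the site runs standard MediaWiki and exposes the usual api.php recent-changes listing; the URL below is a placeholder, not the real Your Archives address.

```python
# A sketch of a contributor-centred metric: how many distinct people
# actually edited a wiki, versus merely visiting it.
# Assumes a standard MediaWiki api.php endpoint; the URL is a placeholder.
from collections import Counter
import requests

API = 'https://wiki.example.org/api.php'  # hypothetical endpoint

def contributor_counts(limit=500):
    params = {
        'action': 'query', 'list': 'recentchanges',
        'rcprop': 'user', 'rclimit': limit, 'format': 'json',
    }
    edits = Counter()
    while True:
        data = requests.get(API, params=params).json()
        for change in data['query']['recentchanges']:
            edits[change.get('user', '(anonymous)')] += 1
        if 'continue' not in data:
            break
        params.update(data['continue'])  # follow the API's pagination
    return edits

counts = contributor_counts()
print(f'{len(counts)} distinct contributors made {sum(counts.values())} edits')
```

Even a crude count like this would distinguish a site sustained by a handful of prolific contributors from one with a genuinely broad base of participants – a distinction which unique-visitor figures conceal entirely.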

This need to be clearer about what we mean by user engagement, and how to measure when we have successfully achieved it, was also my main take-away from the following week’s RunCoCo Conference – Beyond Collections: Crowdsourcing for Public Engagement. Like Arfon Smith of the Zooniverse team, I am not very comfortable with the term ‘crowdsourcing’, and indeed many of the projects showcased at the Beyond conference seemed to me to be more technologically-enhanced outreach events or volunteer projects than true attempts to engage the ‘crowd’ (not that there is anything wrong with traditional approaches, but I just don’t think they’re crowdsourcing). Even where large numbers of people are involved, are they truly ‘engaged’ by receiving a rubber stamp (in the case of the Erster Weltkrieg in Alltagsdokumenten – ‘The First World War in Everyday Documents’ – project) to mark their attendance at an open-day type event? Understanding the social dynamics behind even large-scale online collaborations is important – the Zooniverse ethical contract bears repeating:

  1. Contributors are collaborators, not users
  2. Contributors are motivated and engaged by real research
  3. Don’t waste people’s time

Podcasts of all the Beyond presentations and a series of short, reflective blog posts on the day’s proceedings are available.

Finally, Professor Sherry Turkle’s RSA lecture to celebrate the launch of her new book, Alone Together, about the social impact of the Internet, was rather too brief to give more than a glimpse of her current thinking on our technology-saturated society, but nevertheless there were some intriguing ideas with potentially wide-ranging implications for the future of archives. One was the sense that the Internet is not currently serving our human needs. She also spoke about the tension between the willingness to share and the need for privacy, asking: what is democracy, and what is intimacy, without privacy? In response to questions from the audience, Turkle also claimed that people don’t like to say negative things online because doing so leaves a trace of things that went wrong. If that is true, it might have important implications for what we can expect people to contribute in archival contexts, and for the nature of the debate which might take place in contested spaces of memory. Audio of the event is available from the RSA website.


In conversation with the very excellent RunCoCo project at Oxford University last Friday, I revisited a question which will, I think, prove central to my current research – establishing trust in an online archival environment. This is an important issue both for community archives, such as Oxford’s Great War Archive, and for conventional Archive Services which are taking steps to open up their data to user input in some way – whether by enabling user comments on the catalogue, establishing a wiki, or perhaps making digitised images available on flickr.

A simple, practical scenario to surface some of the issues:

Imagine an image posted to flickr with minimal description. Two flickr users, one of them clearly a member of staff at the Archives concerned, have posted suggested identifications. Since both in fact offer the same name (“Britannia Mill”), it is not immediately clear whether they refer to the same location, or whether the second comment contradicts the first.

Which comment (if either) correctly identifies the image? Would you be inclined to trust an identification from a member of staff more readily than you would accept “Arkwright”’s comment? If so, why? Clicking on “Arkwright”’s profile, we learn that he is a pensioner who lives locally. Does this alter your view of the relative trustworthiness of the two comments (for all we know, the member of staff might have moved into the area just last week)?

How could you test the veracity of the comments? Whose responsibility is this? If you feel it is the responsibility of the Archive Service in question, what resources might be available for this work? If you worked for the Archive Service, would you feel happy to incorporate information derived from these comments into the organisation’s finding aids? Bear in mind that any would-be user searching for images of “Britannia Mills” – wherever the location – would not find this image using the organisation’s standard online catalogue: is potentially unreliable information better than no information at all? What would you consider an ‘acceptable’ level of quality and/or quantity for catalogue metadata for public presentation?

You might think this photograph should never have been uploaded to flickr in its current state – but even this meagre level of description has been sufficient to start an interesting – potentially useful? – discussion, just as a relatively poor-quality scan has been ‘good enough’ to enable public access outside the repository, although it would certainly not suffice for print publication, for example.
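If an Archive Service did want to review such comments systematically before acting on them, gathering them out of flickr is at least mechanically straightforward. Here is a minimal sketch using the public flickr.photos.comments.getList API method; the API key and photo id below are placeholders.

```python
# A sketch of pulling suggested identifications off a flickr image so
# that staff can review them before any finding-aid update. Uses the
# public flickr.photos.comments.getList method; the API key and photo
# id are placeholders.
import requests

FLICKR_REST = 'https://api.flickr.com/services/rest/'

def photo_comments(api_key, photo_id):
    resp = requests.get(FLICKR_REST, params={
        'method': 'flickr.photos.comments.getList',
        'api_key': api_key,
        'photo_id': photo_id,
        'format': 'json',
        'nojsoncallback': 1,
    }).json()
    # Each comment carries the commenter's screen name and the text -
    # the raw material for a manual veracity check.
    return [(c['authorname'], c['_content'])
            for c in resp['comments'].get('comment', [])]

for author, text in photo_comments('YOUR_API_KEY', '1234567890'):
    print(f'{author}: {text}')
```

Of course, extraction is the easy part: everything in the questions above – verification, responsibility, resourcing – remains human work.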

Such ambivalence and uncertainty about accepting user contributions is one reason why The National Archives’ wiki, Your Archives, was initially designed “to be ‘complementary’ to the organisation’s existing material” rather than fully integrated into TNA’s website.

In our discussion on Friday, we identified four ways in which online archives might try to establish trust in user contributions:

  • User Profiles: enabling users to provide background information on their expertise. The Polar Bear Expedition Archives at the University of Michigan have experimented with this approach for registered users of the site, with apparently ambiguous results. Similar features are available on the Your Archives wiki, although, similarly, few users other than TNA staff appear to make use of them. Surfacing the organisational allegiance of staff is of course important, but it does not inherently make their comments more trustworthy (as discussed above) unless more in-depth information about their qualifications and areas of expert knowledge is also provided. A related debate, about whether or not to allow anonymous comments and about the reliability of anonymous online contributions, extends well beyond the archival domain.
  • Shifting the burden of proof to the user: offering to make corrections to organisational finding aids upon receipt of appropriate documentation. This is another technique pioneered on the Polar Bear Expedition Archives site, but it might become burdensome given a particularly active user community.
  • Providing user statistics and/or adjusting the presentation of user contributions on the basis of those statistics: i.e. giving more weight to contributions from users whose previous comments have proved to be reliable. Such techniques are used on Wikipedia (users can earn enhanced editing rights by gaining the trust of other editors), and user information is available from Your Archives, although it is somewhat cumbersome to extract – in its current form, I think it is unlikely anybody would use it to form reliability judgements. A minimal sketch of this kind of reliability weighting follows after this list. This technique is sometimes also combined with…
  • Rating systems: these can be either organisation-defined (as, for instance, on the Brooklyn Museum Collection Online – I do not know of an archives example) or user-defined (the familiar Amazon or eBay ranking systems – but, again, I cannot think of an instance where such a system has been implemented in an archives context, although it is often talked about – can you?). Flickr implements a similar principle, whereby registered users can ‘favourite’ images.
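To make the third of these techniques more concrete, here is a minimal sketch of reliability weighting. The accepted/rejected tallies, the smoothing constant and the example contributors are all illustrative assumptions on my part, not a description of any live system.

```python
# A minimal sketch of reliability weighting: scoring a contribution by
# the contributor's track record. The tallies, smoothing constant and
# example users are illustrative assumptions, not any live system.
from dataclasses import dataclass

@dataclass
class Contributor:
    accepted: int = 0   # past contributions verified as reliable
    rejected: int = 0   # past contributions found to be wrong

    def reliability(self, prior=1.0):
        # Laplace-smoothed score in (0, 1). An unknown user starts at
        # 0.5, so a brand-new commenter is neither trusted nor distrusted.
        return (self.accepted + prior) / (self.accepted + self.rejected + 2 * prior)

staff_member = Contributor(accepted=40, rejected=2)
arkwright = Contributor(accepted=3, rejected=0)
newcomer = Contributor()

for name, c in [('staff', staff_member), ('Arkwright', arkwright), ('new user', newcomer)]:
    print(f'{name}: weight {c.reliability():.2f}')
```

The attraction of a smoothed score is that a new contributor starts from a neutral position rather than being distrusted outright; the difficulty, as with all four techniques, is that somebody still has to judge which past contributions were in fact reliable.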

A quick scan of Google Scholar reveals much research into establishing trust in the online marketplace, and into trust-building in the digital environment as a customer relationship management activity. But are these commercial models directly applicable to information exchange in the archives environment, where the issue at stake is not so much the customer’s trust in the organisation or project concerned (although this clearly has an impact on other forms of trust) as the veracity and reliability of the historical information presented?

Do you have any other suggestions for techniques which could be (or are) used to establish trust in online archives, or further good examples of the four techniques outlined in archival practice? It strikes me that all four options above rely heavily upon human interpretation and judgement calls; scalability will therefore become an issue with very large datasets (particularly those held outside an organisational website) which the Archives may want to manipulate machine-to-machine (see this recent blog post and comments from the Brooklyn Museum).
