
It’s been a while since I’ve posted here purely on digital preservation issues: my work has moved in other directions, although I did attend a number of the digital preservation sessions at the Society of American Archivists’ conference this summer.  I retain a keen interest in digital preservation, however, particularly in developments which might be useful for smaller archives.  Recently, I’ve been engaged in a little work for a project called DiSARM (Digital Scenarios for Archives and Records Management), preparing some teaching materials for the Masters students at UCL to work from next term, and in revising the contents of a guest lecture I present to the University of Liverpool MARM students on ‘Digital Preservation for the Small Repository’.  Consequently, I’ve been trying to catch up on the last couple of years (since I left West Yorkshire Archive Service at the end of 2009) of new digital preservation projects and research.

So what’s new?  Well, from a small archives perspective, I think the key development has been the emergence of several digital curation workflow management systems – Archivematica, Curator’s Workbench, the National Archives of Australia’s Digital Preservation Software Platform (others…?) – which package together a number of different tools to guide the archivist through a sequenced set of stages for the processing of digital content.  The currently available systems vary in their approaches to preservation, comprehensiveness, and levels of maturity, but they represent a major step forward from the situation just a couple of years ago.  In 2008, if (like me when WYAS took in the MLA Yorkshire archive as a testbed) you didn’t have much (or any) money available, your only option was – as one of the former Liverpool students memorably pointed out to me – to cobble together a set of tools as best you could from old socks and a bit of string.  Now we have several offerings approaching an integrated software solution; moreover, these packages are generally open source and freely available, so would-be adopters can download each one and play about with it before deciding which might suit them best.

Having said that, I still think it is important that students (and practitioners, of course) understand the preservation strategies and assumptions underlying each software suite.  When we learn how to catalogue archives, we are not trained merely to use a particular software tool.  Rather, we are taught the principles of archival description, and then we move on to see how these concepts are implemented in practice in EAD or by using specific database applications, such as (in the U.K.) CALM or Adlib.  For DiSARM, students will design a workflow and attempt to process a small sample set of digital documents using their choice of one or more of the currently available preservation tools, which they will be expected to download and install themselves.  This Do-It-Yourself approach will mirror the practical reality in many small archives, where the (frequently lone) archivist often has little access to professional IT support.  Similarly, students at UCL are not permitted to install software onto the university network.  Rather than see this as a barrier, again I prefer to treat this situation as a reflection of organisational reality.  There are a number of very good reasons why you would not want to process digital archives directly onto your organisation’s internal network, and recycling and re-purposing old computer equipment of varying technical specifications and capabilities to serve as workstations for ingest is a fact of life even, it seems, for Mellon-funded projects!

In preparation for writing this DiSARM task, I began to put together for my own reference a spreadsheet listing all the applications I could think of, or have heard referenced recently, which might be useful for preservation processing tasks in small archives.  I set out to record:

  • the version number of the latest (stable) release
  • the licence arrangements for each tool
  • the URL from which the software can be downloaded
  • basic system requirements (essentially the platform(s) on which the software can be run – we have surveyed the class and know there is a broad range of operating systems in use, including several flavours of both Linux and Windows, and Mac OS X)
  • location of further documentation for each application
  • end-user support availability (forums or mailing lists etc)
This all proved surprisingly difficult.  I was half expecting that user-friendly documentation and (especially) support might often be lacking in the smaller projects, but several websites also lack clear statements about system requirements or the legal conditions under which the software may be installed and used.  Does ‘educational use and research’ cover a local authority archives providing research services to the general public (including academics)?  Probably not, but it would presumably allow for use in a university archives.  Thanks to the wonders of interpreted programming languages (mostly Java, but Python also puts in an occasional appearance), many tools are effectively cross-platform, but it is astonishing how many projects fail to say so clearly.  This is self-evident to a developer, of course, but not at all obvious to an archivist, who will probably be worried about bringing coffee into the repository, let alone a reptile.  Oh, and if you expect your software to be compiled from source, or require sundry other faffing around at a command line before use, I’m sorry, but your application is not “easy to implement” for ordinary mortals, as more than one site claimed.  Is it really so hard to generate binary executables for common operating systems (or, if you have a good excuse – such as Archivematica, which is still in alpha development – at least to provide detailed step-by-step instructions)?  Many projects of course use SourceForge to host code, but use another website for documentation and updates – it can be quite confusing finding your way around.  The venerable ClamAV seems to have undergone some kind of Windows conversion, and although I’m sure that Unix packages must be there somewhere, I’m damned if I could find them easily…
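For anyone minded to keep a similar registry, the fields listed above are easy to capture in machine-readable form. A minimal sketch, with invented field names and a purely hypothetical example row (the values are placeholders, not facts about any real tool):

```python
import csv
import io

# Field names invented for this sketch, mirroring the checklist above;
# the row values are placeholders, not verified facts about any real tool.
FIELDS = ["tool", "latest_stable_version", "licence", "download_url",
          "platforms", "documentation_url", "support_channels"]

def write_registry(rows: list[dict]) -> str:
    """Serialise the tool registry as CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

registry = write_registry([{
    "tool": "ExampleTool",          # hypothetical entry
    "latest_stable_version": "x.y",
    "licence": "open source (check the actual terms)",
    "download_url": "https://example.org/download",
    "platforms": "cross-platform (Java)",
    "documentation_url": "https://example.org/docs",
    "support_channels": "mailing list; forum",
}])
print(registry)
```

A plain CSV like this has the advantage that it can be shared with students or colleagues without requiring any particular spreadsheet application.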

All of which plays into a wider debate about just how far the modern archivist’s digital skills ought to reach (there are many other versions of this debate; the one linked – from 2006, so now quite old – just happens to be one of the most comprehensive attempts to define a required digital skill set for information practitioners).  No doubt there will be readers of this post who believe that archivists shouldn’t be dabbling in this sort of stuff at all, especially if they also work for an organisation which lacks the resources to establish a reliable infrastructure for a trusted digital repository.  And certainly I’ve been wondering lately whether some kind of archivists’ equivalent of The Programming Historian would be welcome or useful, teaching basic coding tailored to common tasks that an archivist might need to carry out.  But essentially, I don’t subscribe to the view that all archivists need to re-train as computer scientists or IT professionals.  Of course, these skills are still needed (obviously!) within the digital preservation community, but to drive a car I don’t need to be a mechanic or have a deep understanding of transport infrastructure.  Digital preservation needs to open up spaces around the periphery of the community where newcomers can experiment and learn; otherwise it will become an increasingly closed and ultimately moribund endeavour.


8am on Saturday morning, and those hardy souls who have not yet fled to beat Hurricane Irene home or who are stranded in Chicago, plus other assorted insomniacs, were presented with a veritable smörgåsbord of digital preservation goodness.  The programme has many of the digital sessions scheduled at the same time, and today I decided not to session-hop but stick it out in one session in each of the morning’s two hour-long slots.

My first choice was session 502, Born-Digital archives in Collecting Repositories: Turning Challenges into Byte-Size Opportunities, primarily an end-of-project report on the AIMS Project.  It’s been great to see many such practical digital preservation sessions at this conference, although I do slightly wonder what it will take before working with born-digital truly becomes part of the professional mainstream.  Despite the efforts of all the speakers at sessions like this (and in the UK, colleagues at the Digital Preservation Roadshows with which I was involved, and more recent similar events), there still appears to be a significant mental barrier which stops many archivists from giving it a go.  As the session chair began her opening remarks this morning, a woman behind me remarked “I’m lost already”.

There may be some clues in the content of this morning’s presentations: in amongst my other work (as would be the case for most archivists, I guess) I try to keep reasonably up-to-date with recent developments in practical digital preservation.  For instance, I was already well aware of the AIMS Project, although I’d not had a previous opportunity to hear about their work in any detail, but here were yet more newly suggested tools for digital preservation: I happen to know of FTK Imager, having used it with the MLA Yorkshire archive accession, although what wasn’t stated was that the full FTK forensics package is damn expensive and the free FTK Imager Lite (scroll down the page for links) is an adequate and more realistic proposition for many cash-strapped archives.  BagIt is familiar too, but Bagger, a graphical user interface to the BagIt Library, is new since I last looked (I’ll add links later – the Library of Congress site is down for maintenance).  Sleuthkit was mentioned at the research forum earlier this week, but fiwalk (“a program that processes a disk image using the SleuthKit library and outputs its results in Digital Forensics XML”) was another new one on me, and there was even talk in this session of hardware write-blockers.  All this variety is hugely confusing for anybody who has to fit digital preservation around another day job, not to mention potentially expensive when it comes to buying hardware and software, and acquiring the skills necessary to install and maintain such a jigsaw puzzle of a system.  As the project team outlined their wish list for yet another application, Hypatia, I couldn’t help wondering whether we can’t promote a little more convergence between all these different tools, both digital-preservation-specific and more general.
For instance, the requirement for a graphical drag ‘n’ drop interface to help archivists create the intellectual arrangement of a digital collection and add metadata reminded me very much of recent work at Simmons College on a graphical tool to help teach archival arrangement and description (whose name I forget, but will add it when it comes back to me!*).  I was particularly interested in the ‘access’ part of this session, especially the idea that FTK’s bookmark and label functions could be transformed into user-generated content tools, to enable researchers to annotate and tag records, and in the use of network graphs as a visual finding aid for email collections.
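As an aside for readers who haven’t met BagIt: the package format itself is very simple, which is part of its appeal for small archives. The sketch below (standard library only; emphatically not the Library of Congress BagIt Library or Bagger code, and with invented file names) lays out the three essentials of a bag: the bagit.txt declaration, the data/ payload directory, and a checksum manifest.

```python
import hashlib
from pathlib import Path

def make_bag(src_files: dict, bag_dir: str) -> None:
    """Lay out a minimal BagIt-style bag: a bagit.txt declaration,
    a data/ payload directory, and a sha256 manifest over the payload."""
    bag = Path(bag_dir)
    (bag / "data").mkdir(parents=True, exist_ok=True)
    manifest_lines = []
    for name, content in src_files.items():
        (bag / "data" / name).write_bytes(content)
        digest = hashlib.sha256(content).hexdigest()
        # Manifest format: "<checksum>  <relative path>" per payload file
        manifest_lines.append(f"{digest}  data/{name}")
    (bag / "bagit.txt").write_text(
        "BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n")
    (bag / "manifest-sha256.txt").write_text("\n".join(manifest_lines) + "\n")

# Hypothetical single-file accession, purely for illustration
make_bag({"letter.txt": b"Dear Sir..."}, "example_bag")
```

As I understand it, Bagger and the BagIt Library add optional tag files, validation and fetch support on top of this basic layout, but the fixity-plus-payload idea is the heart of it.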

The rabbit-caught-in-headlights issue seems less of a problem for archivists jumping on the Web 2.0 bandwagon, which was the theme of session 605, Acquiring Organizational Records in a Social Media World: Documentation Strategies in the Facebook Era, where we heard about the use of social media, primarily Facebook, to contact and document student activities and student societies in a number of university settings, and from a university archivist just beginning to dip her toe into Twitter.  As the speakers outlined a strategy of working directly with student organisations and providing training to ‘student archivists’, as a way of enabling the capture of social media content (both simultaneously with upload and by web-crawling sites afterwards), I was reminded of my own presentation at this conference: surely here is another example of real-life community development? The archivist is deliberately ‘going out to where the community is’ and adapting to the community norms and schedules of the students, rather than expecting the students to comply with archival rules and expectations.

This afternoon I went to learn about SNAC: the Social Networks and Archival Context project (session 710), something I’ve been hearing other people mention for a long time now but knew little about.  SNAC is extracting names (corporate, personal, family) from Encoded Archival Description (EAD) finding aids as EAC-CPF records, then matching these together, and against pre-existing authority records, to create a single archival authorities prototype.  The hope is to then extend this authorities cooperative both nationally and potentially internationally.
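To make the idea concrete, here is a toy sketch of the extraction step, using an invented EAD 2002-style fragment. The real SNAC pipeline works over full finding-aid corpora, serialises the results as EAC-CPF, and does sophisticated matching, none of which is attempted here; this just shows pulling the three name element types out of EAD XML.

```python
import xml.etree.ElementTree as ET

# A made-up EAD 2002-style fragment; real finding aids are far richer.
ead = """
<ead>
  <archdesc level="collection">
    <did>
      <origination><persname>Bentham, Jeremy, 1748-1832</persname></origination>
    </did>
    <controlaccess>
      <corpname>University College London</corpname>
      <famname>Bentham family</famname>
    </controlaccess>
  </archdesc>
</ead>
"""

root = ET.fromstring(ead)
# The three name element types SNAC works with: personal, corporate, family
names = [(el.tag, el.text) for tag in ("persname", "corpname", "famname")
         for el in root.iter(tag)]
print(names)
```

Each extracted name would then become (or be matched against) an EAC-CPF authority record, which is where the hard de-duplication work starts.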

My sincere thanks to the Society of American Archivists for their hospitality during the conference, and once again to those who generously funded my trip – the Archives and Records Association, University College London Graduate Conference Fund, UCL Faculty of Arts and UCL Department of Information Studies.

* UPDATE: the name of the Simmons’ archival arrangement platform is Archivopteryx (not to be confused with the Internet mail server Archiveopteryx which has an additional ‘e’ in the name)


Friday had a bit of a digital theme for me, beginning with a packed, standing-room-only session 302, Practical Approaches to Born-Digital Records: What Works Today. After a witty introduction by Chris Prom about his Fulbright research in Dundee, a series of speakers introduced their digital preservation work, with a real emphasis on ‘you too can do this’.  I learnt about a few new tools: firefly, a tool used to scan for American social security numbers and other sensitive information (not much use in a British context, I imagine, but an interesting approach all the same), and TreeSize Professional, a graphical hard disk analyser; several projects were also making use of the Duke Data Accessioner, a tool with which I was already familiar but have never used.  During the morning session, I also popped in and out of ‘team-Brit’ session 304, Archives in the Web of Data, which discussed developments in the UK and US in opening up and linking together archival descriptive data, and session 301, Archives on the Go: Using Mobile Technologies for Your Collections, where I caught a presentation on the use of FourSquare at Stanford University.
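I have no knowledge of how firefly actually works internally, but the basic idea of scanning text for US social security numbers can be sketched in a few lines. The pattern below is deliberately naive; real tools also check valid number ranges, alternative formats, and surrounding context.

```python
import re

# Naive pattern: three digits, two digits, four digits, hyphen-separated,
# bounded so that longer digit runs don't produce false matches.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def find_ssns(text: str) -> list[str]:
    """Return candidate SSN-shaped strings found in text."""
    return SSN_PATTERN.findall(text)

sample = "Employee 219-09-9999 attended; invoice 12-345 did not match."
print(find_ssns(sample))  # → ['219-09-9999']
```

A UK equivalent might scan for National Insurance number shapes instead; the approach of sweeping an accession for sensitive patterns before release is the transferable part.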

In the afternoon, I mostly concentrated on session 401, Re-arranging Arrangement and Description, with a brief foray into session 407, Faces of Diversity: Diasporic Archives and Archivists in the New Millennium.  Unless I missed this whilst I was out at the other session, nobody in session 401 mentioned the series system as a possible alternative or resolution to some of the tensions identified in a strict application of hierarchically-interpreted original order, which surprised me.  There were some hints towards a need for a more object-oriented view of description in a digital environment, and of methods of addressing the complexity of having multiple representations (physical, digital etc.), but I have been reading my UCL colleague Jenny Bunn’s recently completed PhD thesis, Multiple Narratives, Multiple Views: Observing Archival Description, on flights for this trip, which would have added another layer to the discussion in this session.

And continuing the digital theme, I was handed a flyer for an event coming later this year (on 6th October): Day of Digital Archives which might interest some UK colleagues.  This is

…an initiative to raise awareness of digital archives among both users and managers. On this day, archivists, digital humanists, programmers, or anyone else creating, using, or managing digital archives are asked to devote some of their social media output (i.e. tweets, blog posts, youtube videos etc.) to describing their work with digital archives.  By collectively documenting what we do, we will be answering questions like: What are digital archives? Who uses them? How are they created and managed? Why are they important?


A round-up of a few pieces of digital goodness to cheer up a damp and dark start to October:

What looks like a bumper new issue of the Journal of the Society of Archivists (shouldn’t it be getting a new name?) is published today.  It has an oral history theme, but actually it was the two articles that don’t fit the theme which caught my eye for this blog.  Firstly, Viv Cothey’s final report on the Digital Curation project, GAip and SCAT, at Gloucestershire Archives, with which I had a minor involvement as part of the steering group for the Society of Archivists-funded part of the work.  The demonstration software developed by the project is now available for download via the project website.  Secondly, Candida Fenton’s dissertation research on the Use of Controlled Vocabulary and Thesauri in UK Online Finding Aids will be of interest to my colleagues in the UKAD network.  The issue also carries a review, by Alan Bell, of Philip Bantin’s book Understanding Data and Information Systems for Recordkeeping, which I’ve also found to be a helpful way in to some of the more technical electronic records issues.  If you do not have access via the authentication delights of Shibboleth, no doubt the paper copies will be plopping through ARA members’ letterboxes shortly.

Last night, by way of supporting the UCL home team (read: total failure to achieve self-imposed writing targets), I had my first go at transcribing a page of Jeremy Bentham’s scrawled notes on Transcribe Bentham.  I found it surprisingly difficult, even on the ‘easy’ pages!  Admittedly, my palaeographical skills are probably a bit rusty, and Bentham’s handwriting and neatness leave a little to be desired – he seems to have been a man in a hurry – but what I found most tricky was not being able to glance at the page as a whole and get the gist of the sentence ahead at the same time as attempting to decipher particular words, and in particular not being able to search down the whole page looking for similar letter shapes.  The navigation tools do allow you to pan and scroll, and zoom in and out, but when you’ve got the editing page up on the screen as well as the document, you’re a bit squished for space.  Perhaps it would be easier if I had a larger monitor.  Anyway, it struck me that this type of transcription task is definitely a challenge for people who want to get their teeth into something, not the type of thing you might dip in and out of in a spare moment (like indicommons on iPhone and iPad, for instance).

I’m interested in reward and recognition systems at the moment, and how crowdsourcing projects seek to motivate participants to contribute.  Actually, it’s surprising how many projects seem not to think about this at all – the ‘build it and wait for them to come’ attitude.  Quite often, it seems, the result is that ‘they’ don’t come, so it’s interesting to see Transcribe Bentham experiment with a number of tricks for monitoring progress and encouraging people to keep on transcribing.  So, there’s the Benthamometer for checking on overall progress; you can set up a watchlist to keep an eye on pages you’ve contributed to; individual registered contributors can set up a user profile to state their credentials and chat to fellow transcribers on the discussion forum; and there’s a points system, depending on how active you are on the site, with a leader board of top transcribers.  The leader board seems to be fuelling a bit of healthy transatlantic competition right at the moment, but given the ‘expert’ wanting-to-crack-a-puzzle nature of the task here, I wonder whether the more social / community-building facilities might prove more effective over the longer term than the quantitative approaches.  One to watch.

Finally, anyone with the techie skills to mashup data ought to be welcoming The National Archives’ work on designing the Open Government Licence (OGL) for public sector information in the U.K.  I haven’t (got the technical skills) but I’m welcoming it anyway in case anyone who has hasn’t yet seen the publicity about it, and because I am keen to be associated with angels.


Since it seems a few people read my post about day one of ECDL2010, I guess I’d better continue with day two!

Liina Munari’s keynote about digital libraries from the European Commission’s perspective provided delegates with an early morning shower of acronyms.  Amongst the funder-speak, however, there were a number of proposals from the forthcoming FP7 Call 6 funding round which are interesting from an archives and records perspective, including projects investigating cloud storage and the preservation of context, and appraisal and selection using the ‘wisdom of crowds’. Also, the ‘Digital Single Market’ will include work on copyright, specifically the orphan works problem, which promises to be useful to the archives sector – Liina pointed out that the total size of the European public domain is smaller than the US equivalent because of the extended period of copyright protection applying to works whose current copyright owners are unknown. But I do wish people would not use the ‘black hole’ description; it’s alarmist and inaccurate.  If we combine this twentieth-century black hole (digitised orphan works) with the oft-quoted born-digital black hole, it seems a wonder we have any cultural heritage left in Europe at all.

After the opening keynote, I attended the stream on the Social Web/Web 2.0, where we were treated to three excellent papers on privacy-aware folksonomies, seamless web editing, and the automatic classification of social tags. The seamless web editor, seaweed, is of interest to me in a personal capacity because of its WordPress plugin, which would essentially enable the user to add new posts or edit existing ones directly in the web browser, without recourse to the cumbersome WordPress dashboard (and without absent-mindedly adding new pages instead of new posts, which is what I generally manage to do by mistake). I’m sure there are archives applications too, possibly for instance in terms of the user interface design for encouraging participation in archival description.  Privacy-aware folksonomies, a system to enable greater user control over tagging (with levels of user-only, friends, and tag provider), might have application in respect of some of the more sensitive archive content, such as mental health records perhaps.  The paper on the automatic classification of social tags will be of particular interest to records managers interested in the searchability and re-usability of folksonomies in record-keeping systems, as well as to archivists implementing tagging systems in their online catalogues or digital archives interfaces.

After lunch we had a poster and demo session.  Those which particularly caught my attention included a poster from the University of Oregon entitled ‘Creating a Flexible Preservation Infrastructure for Electronic Records’, described as the ‘do-it’ solution to digital preservation in a small repository without any money.  Sounded familiar!  The authors, digital library expert Karen Estlund and University Archivist Heather Briston, described how they have made best use of existing infrastructure, such as shared drives (for deposit) and the software package Archivists’ Toolkit for description.  Their approach is similar to the workflow I put in place for West Yorkshire Archive Service, except that the University are fortunate to be in a position to train staff to carry out some self-appraisal before deposit, which simplifies the process.  I was also interested (as someone who is never really sure why tagging is useful) in a poster ‘Exploring the Influence of Tagging Motivation on Tagging Behaviour’, which classified taggers into two groups, describers and categorisers, and in the demonstration of the OCRopodium project at King’s College London, exploring the use of optical character recognition (OCR) with typescript texts.

In the final session of the day, I was assigned to the stream on search in digital libraries, where papers explored the impact of the search interface on search tasks, relevance judgements, and search interface design.

Then there was the conference dinner…


I had a day at the Society of Archivists’ Conference 2010 in Manchester last Thursday; rather a mixed bag. I wasn’t there in time for the first couple of papers, but caught the main strand on digital preservation after the coffee break. It’s really good to see digital preservation issues get such a prominent billing (especially as I understand there were few sessions on digital preservation at the much larger Society of American Archivists’ Conference this year), although I was slightly disappointed that the papers were essentially show-and-tell rehearsals of how various organisations are tackling the digital challenge. I have given exactly this type of presentation at the Society’s Digital Preservation Roadshows and at various other beginners/introductory digital preservation events over the past year.  Sometimes of course this is precisely what is needed to get the nervous to engage with the practical realities of digital preservation, but all the same, it’s a pity that one or more of the papers at the main UK professional conference of the year did not develop the theme a little further and stimulate some discussion on the wider implications of digital archives.  However, it was interesting to see how the speakers assumed familiarity with OAIS and digital preservation concepts such as emulation. I suspect some of the audience were left rather bewildered by this, but the fact that speakers at an archives conference feel they can make such assumptions about audience understanding does at least suggest that some awareness of digital preservation theory and frameworks is at last crawling into the professional mainstream.

I was interested in Meena Gautam’s description of the National Archives of India‘s preparations for receiving digital content, which included a strategy for recruiting staff with relevant expertise. Given India’s riches in terms of qualified IT professionals, I would have expected a large pool of skilled people from which to recruit. But the direction of her talk seemed to suggest that, in actual fact, NAI is finding it difficult to attract the experts they require. [There was one particular comment – that the NAI considers conversion to microfilm to be the current best solution for preserving born-digital content – which seemed particularly extraordinary, although I have since discovered the website of the Indian National Digital Preservation Programme, which does suggest that the Indian Government is thinking beyond this analogue paradigm.]  Anyway, NAI are not alone in encountering difficulties in attracting technically skilled staff to work in the archives sector.  I assume that the reason for this is principally economic, in that people with IT qualifications can earn considerably more working in the private sector.

It was a shame that there was not an opportunity for questions at the end of the session, as I would have liked to ask Dr Gautam how archives could or should try to motivate computer scientists and technicians to work in the area of digital preservation.  Later in the same session, Sharon McMeekin from the Royal Commission on the Ancient and Historical Monuments of Scotland advocated that archives organisations should collaborate to build digital repositories, and I and several others amongst the Conference twitter audience agreed.  But from observation of the real archives world, I would suggest that, although most people agree in principle that collaboration is the way forward, there is very little evidence – as yet at least – of partnership in practice. I wonder just how likely it is that joint repositories will emerge in this era of recession and budget cuts (which might be when we need collaboration most, but when in reality most organisations’ operations become increasingly internally focused).  Since it seems archives are unable to compete in attracting skilled staff in the open market, and – for a variety of reasons – it seems that the establishment of joint digital repositories is hindered by traditional organisational boundaries, I pondered whether a potential solution to both issues might lie in Yochai Benkler‘s third organisational form of commons-based peer-production: as the means both to motivate a community of appropriately skilled experts to contribute their knowledge to the archives sector, and to build sustainable digital archives repositories in common.  There are already of course examples of open source development in the digital archives world (Archivematica is a good example, and many other tools, such as the National Archives of Australia’s Xena and The (UK) National Archives DROID are available under open source licences), since the use of open standards fits well with the preservation objective.  
Could the archives profession build on these individual beginnings in order to stimulate or become the wider peer community needed to underpin sustainable digital preservation?

After lunch, we heard from Dr Elizabeth Shepherd and Dr Andrew Flinn on the work of the ICARUS research group at UCL’s Department of Information Studies, of which my user participation research is a small part.  It was good to see the twitter discussion really pick up during the paper, and there was a good question and answer session afterwards.  Sarah Wickham has a good summary of this presentation.

Finally, at the end of the day, I helped out with the session to raise awareness of the UK Archives Discovery Network, and to gather input from the profession on how they would like UKAD to develop.  We asked for comments on post-it notes on a series of ‘impertinent questions’.  I was particularly interested in the outcome of the question based upon UKAD’s Objective 4: “In reality, there will always be backlogs of uncatalogued archives.” Are volunteers the answer?  From the responses we gathered, there does appear to be increasing professional acceptance of the use of volunteers in description activities, although I suspect our use of the word ‘volunteer’ may be holding back appreciation of an important difference between the role of ‘expert’ volunteers in archives and user participation by the crowd.


A write-up of the second Archival Education Research Institute, which I attended from 21st to 25th June.

The scheduled programme (or program, I suppose!) was a mixture of plenary sessions on the subject of interdisciplinarity in archival research, methods and mentoring workshops, curriculum discussion sessions, and research papers given by both doctoral students and faculty members.  We also experienced two fascinating and engaging, if slightly US-centric, theatrical performances by the University of Michigan’s Center for Research on Learning Theatre Program (ok, now I’m confused – why would it be ‘center’ but not ‘theater’?).

Most valuable to me personally were the methods workshops on Information Retrieval and User Studies.  IR research is largely new to me, although I was aware that current development work at The National Archives [TNA] includes a research strand, being carried out at the University of Sheffield’s Information Studies Department, which uses IR techniques to investigate information-seeking behaviour across TNA’s web domain and catalogue knowledge base.  I was interested to see whether these methods could be adapted for my research interests in user participation.  User Studies turned out to be more familiar territory, not least because of many years’ responsibility for coordinating and analysing the Public Services Quality Group [PSQG] Survey of Visitors to UK Archives across the West Yorkshire Archive Service’s five offices.  I hadn’t previously appreciated that the PSQG survey is unique in the archival world in providing over a decade’s worth of longitudinal data on UK archive users (despite what it says on the NCA website, the survey was first run in 1998), and it seems a shame that only occasional annual reports of the survey results have been formally published.

Of the paper sessions, I was particularly interested in several examples of participatory archive projects.  The examples given in the Digital Cultural Communities session – in particular Donghee Sinn’s outline of the No Gun Ri massacre digital archives and Vivian Wong’s film-making work with the Chinese American community in Los Angeles, together with Michelle Caswell’s description of the Cambodian Human Rights Tribunal in the session on Renegotiating Principles and Practice – reinforced my earlier conviction that past trauma or marginalisation may help to promote user-archives collaboration, and provide greater resilience against (or perhaps more sophisticated mechanisms for resolving) controversy.  However, Sue McKemmish and Shannon Faulkhead, in their presentations about another previously persecuted group, Indigenous Australians (specifically the Koorie and Gundjitmara communities), gave me hope that the participatory attitudes of the Indigenous communities are just an early precursor to a much wider social movement which puts a high value upon co-creation and co-responsibility for records and record-keeping.  [Incidentally, if you have access, I see that Sue and Shannon’s Monash colleague Livia Iacovino has just published an article in Archival Science entitled Rethinking archival, ethical and legal frameworks for records of Indigenous Australian communities: a participant relationship model of rights and responsibilities, which looks highly pertinent – it’s currently in the ‘online first’ section.]  I was also interested in Shannon’s comments about developing a framework to incorporate or authenticate traditional oral knowledge as an integral part of the overall community ‘archive’ (I’m not sure I’ve got this quite right, and would like to chat to her further about it).
William Uricchio has remarked of contemporary digital networks that “Decentralized, networked, collaborative, accretive, ephemeral and dynamic… these developments and others like them bear a closer resemblance to oral cultures than to the more stable regimes of print (writing and the printing press) and the trace (photography, film, recorded sound)”¹.  What can we learn from oral culture to inform our development of participatory practice in the digital domain?

Carlos Ovalle gave a useful paper on Copyright Challenges with Public Access to Digital Materials in Cultural Institutions in the Challenges/Problems in Use, Re-use, and Sharing session, which was interesting in the light of the UK Digital Economy Act and recent amendments to UK Copyright legislation, and some of my own current concerns about digitisation practices and business models in UK archives.

I cannot say I particularly enjoyed the plenary sessions and ensuing discussions.  I found the whole dispute about whether archival ‘science’ could, or should, be considered inter-disciplinary or multi-disciplinary, and which disciplines are core and which peripheral, somewhat sterile and frankly rather futile.  Some of the arguments seemed to stand as witness to a kind of professional identity crisis, undermining any claim that archival research might have to a wider relevance in the modern world.  I was particularly surprised at how controversial ‘collaboration’ seemed to be in a US research context – a striking contrast, I felt, to the pervasive ‘partnership’ ethos that is accepted best practice in fields with which I am familiar in the UK.  This is not just, I think, because I worked for what is in many ways a pioneering partnership of local authorities at West Yorkshire Joint Services: the current government policy on archives, Archives for the 21st Century, similarly emphasises the benefits and indeed the necessity (in the current economic climate) of partnership working in a specific archives context.

Sadly, there doesn’t seem to have been much blogging about AERI, but you can read one Australian participant’s Lessons from AERI Part I (is there a Part II coming soon, Leisa?!).  I’ll link to any further blog posts I notice in the comments.

Finally, nothing to do with AERI, but I’ve finally got round to registering this blog with technorati and need to include the claim code in a post, so here goes: CF2RCBCUPWQC.

¹ Uricchio, W., ‘Moving Beyond the Artifact: Lessons from Participatory Culture’, in Preserving the Digital Heritage, Netherlands National Commission for UNESCO, 2007.  <http://www.knaw.nl/ecpa/publ/pdf/2735.pdf>

Read Full Post »
