Life on a stick

Last week, I installed the latest Fedora (Red Hat) Linux operating system on a 2GB USB stick. The installation instructions were clear and, with a fast USB stick and PC, the system runs very well. There’s a nice Windows application that installs Fedora with just a few clicks. I use Ubuntu Linux at home, and this was a nice way to try out Fedora and also to carry around an entire operating system, with my preferred applications, for use whenever I have access to a computer.

I’ve also run PortableApps from a USB stick. This allows me to run applications I use at home, such as Firefox and Pidgin, which are not available on the corporate desktop. The applications run in isolation on the USB stick, and all settings and cached data are preserved and taken away when you unplug it. The applications run well and look and feel like applications installed on the PC. Only certain applications have been ‘ported’ to PortableApps, but there’s a good selection.

I mention these two experiences because they’re examples of how individuals can continue to personalise their learning or working environments in situations where the computing environment on offer is necessarily restricted for security or support reasons. As software increasingly runs on the web, accessible from any browser, and as we continue to use computers in all aspects of life, whether at home, on the move or at work, there’s an expectation that our personal choices of preferred web browser, preferred IM client, our bookmarks, settings, saved passwords, etc. should continue to be available to us both inside and outside of institutions. JISC’s ‘In Their Own Words’ report confirms this, and apparently some employers are also acknowledging it by allowing staff to purchase their own PC equipment. It says a lot about our relationships with computer hardware and software. Not all technology has this effect on us. I’m still happy to use the work-provided fridge and toaster, although if I were using them constantly throughout the day, I might begin to object…

Do you wish your personalised computing environment was available to you whenever you turned on a PC?

Spaces and Places

Next week I am going to a conference in Helsinki called ‘Higher Education: Spaces and Places for Learning, Innovation and Knowledge Transfer’.

Higher education is changing and its facilities need to support this. The conference looks at this change and how facilities need to respond. Starting with an evaluation of change as it affects higher education, it focuses on the spaces where learning, innovation and knowledge transfer happen, be it in classrooms, offices, research labs, clubs, cafes or the places between, inside or outside them.

The formal and informal exchange of knowledge takes place not only within the university but also through collaborations between universities, the community and business. Spaces and places need to reflect this. Nor are universities’ spaces and places confined to campuses; other types of space, such as research parks, are emerging too.

Clearly the conference in Helsinki ties in closely with the HEFCE funded Learning Landscapes project, of which the University of Lincoln is the lead partner. The project’s objectives are to

promote closer collaboration between academics and estates professionals in the development of new learning landscapes, so that the strengths of the traditional academic environment are not lost when new spaces are developed to foster innovative approaches. It aims to develop a high-level framework, pathways and tool set to facilitate the dialogue between HEI senior academic managers and their estates directors concerning the future direction of teaching and research practice and its implications for the built estate. Process tools will be piloted at steering group institutions and a training programme developed.

Mike Neary, Head of CERD and Project Manager of the Learning Landscapes project, is unable to go to Helsinki, and I was fortunate to be able to take his place and accompany another colleague from CERD. I’ll be blogging from the conference and intend to post some video of the site visits when I return.

EPrints Session and OR08 Reflections

Back in the office, following a week away at the Open Repositories conference.

The last couple of days were spent in EPrints sessions, as that is the repository software we use here at Lincoln. I found the first session the most interesting, as the new features in EPrints 3.1 were discussed. The linked page explains the changes in v3.1 in detail, but in summary they give repository managers much more control through a web interface, rather than requiring config files to be edited directly. Les’ slides give a nice overview.

The following session on EPrints and the RAE generally reflected the experience we’ve had using EPrints 2 for the RAE last year.

A session on repository analytics gave a very useful overview of using Google Analytics, AWStats and IRStats to measure the various uses of an EPrints repository. IRStats, which has been developed at Southampton specifically for EPrints, was particularly useful, and I look forward to installing it.

The final sessions were mainly aimed at developers with a knowledge of Perl. I found the session on how to write plugins for EPrints 3 clear and interesting, but not especially useful as I don’t understand Perl. Still, it was obvious, even to me, that with a basic knowledge of programming, plugins could be written quite easily. I think it’s important for repository managers to immerse themselves in the technicalities of repository development even if they don’t understand much of the detail. Just by sharing ideas and questions with developers, you get a better understanding of what is involved in rolling out new features and a sense of what can be achieved within given resources.

On the whole, the conference leaned towards the technical rather than the strategic and managerial aspects of institutional repositories. There were a lot of developers present and a high number of technical projects discussed. Personally, I appreciated this and came away with a good sense of where the development of repositories is going. It would have been good, though, to have had an event explicitly aimed at bringing developers and repository staff together.

Finally, I do wonder whether the open access repository community would benefit from engaging with developments in Enterprise Content Management, as there is a great deal of overlap: both face similar issues around workflow, IPR and technical standards. Perhaps there are universities evaluating the open source Alfresco ECMS as a repository platform. If so, I’d like to hear about them.

Next year, the conference is in Atlanta, USA.

Session 7: Usage

This part of the conference ended with two excellent and very different presentations on measuring the usage and impact of scholarly output.

Tim Brody, from the University of Southampton, discussed his work developing IRStats, a tool to measure the use and impact of open access repositories. IRStats has been developed to answer questions such as, “What is Professor Smith’s most downloaded paper?” and “Who is the most highly downloaded author in Mathematics?” Existing tools such as Google Analytics and AWStats don’t offer this level of detail, which is useful both for strategically positioning the repository as an important tool within the University and as a service to individual scholars and departments. IRStats is available for EPrints and I intend to try it in our repository.
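The basic idea behind this kind of analytics is to mine the repository’s web-server logs. As a rough illustration only (this is not IRStats’s actual implementation, and the log format and URL pattern here are my assumptions), a script that tallies full-text downloads per eprint ID from Apache-style log lines might look like this:

```python
import re
from collections import Counter

# Hypothetical Apache-style access-log lines; the /id/eprint/<n>/ URL
# pattern is an assumption for the sake of the example.
LOG_LINES = [
    '127.0.0.1 - - [02/Apr/2008:10:01:00 +0100] "GET /id/eprint/42/paper.pdf HTTP/1.1" 200 1024',
    '127.0.0.1 - - [02/Apr/2008:10:02:00 +0100] "GET /id/eprint/7/thesis.pdf HTTP/1.1" 200 2048',
    '127.0.0.1 - - [02/Apr/2008:10:03:00 +0100] "GET /id/eprint/42/paper.pdf HTTP/1.1" 200 1024',
]

DOWNLOAD = re.compile(r'"GET /id/eprint/(\d+)/\S+\.pdf HTTP')

def download_counts(lines):
    """Count successful full-text downloads per eprint ID."""
    counts = Counter()
    for line in lines:
        match = DOWNLOAD.search(line)
        if match and ' 200 ' in line:  # only count successful requests
            counts[match.group(1)] += 1
    return counts

counts = download_counts(LOG_LINES)
print(counts.most_common(1))  # eprint 42 is the most downloaded
```

The real tool goes much further, filtering out robots and repeated clicks and mapping eprint IDs to authors and departments, which is what makes questions like “Professor Smith’s most downloaded paper” answerable.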

The final presentation was by Johan Bollen, from the Los Alamos National Laboratory. He picked up where Tim left off and discussed a much larger-scale project called MESUR. This project also attempts to measure the impact of scholarly output by analysing metrics from usage data. It differs from the IRStats project in both its methodology and scale, combining the evaluation of usage, citation and bibliographic data. By analysing this data, they’ve produced some fascinating graphs which show the relationships between academic disciplines. This is a project I look forward to learning more about.

As I mentioned, this was the last session in this part of the conference. For the next day and a half, I will be attending an EPrints User Group session, where I hope to learn more about the new version of EPrints, people’s experience of the RAE exercise and repository analytics. There are also a couple of training and support sessions which will be useful.

Session 5: Legal

Grace Agnew, from Rutgers University Libraries, presented over 40 PPT slides on Digital Rights Management. Her book is due to be published later this year. It’s still not clear to me why we need DRM in open access repositories. Surely this conference is an opportunity to promote the benefits of Copyleft. A simple way of managing the rights to academic research, which costs nothing, is to attach a Creative Commons license to the work. It’s what software developers have been doing with the similar GPL license for 20 years, to great reward. Of course, this ignores the issue that for most academics, the IPR ultimately belongs to the University, and it’s at this level that the discussion needs to be had. An academic who deposits their work in a repository and chooses to license it under a Creative Commons license may be forgetting that they do not own the work in the first place. In practice, academics are usually free to publish their work as they wish, but the explicit application of a Copyleft license is, unfortunately, not a guaranteed right.

Next, Brian Fitzgerald, from QUT Law School, discussed the OAK Law Project. The project looks fascinating and notably links to the Creative Commons initiative in Australia. It’s a shame that Brian didn’t talk about that and its relevance to open access repositories.

Finally, Jenny Brace, from the Version Identification Framework Project, presented the results of their project. By this point, the microphone in the auditorium had stopped working and I couldn’t hear very much, which was a shame, as it’s an important and interesting area of study, something which I’ve had to deal with ever since working in Collections Management at the NFTVA, where the correct ‘versioning’ of TV and Film materials was a constant issue. ‘Version’ means different things to different groups of people.

Session 4: National & International Perspectives

Arjan Hogenaar & Wilko Steinhoff, from KNAW, gave a presentation on AID, a Dutch Academic Information Domain. I’ll be honest and admit I didn’t pay much attention to this as I was writing up my blog notes for Session 3. Follow the hyperlinks for more information.

I was able to concentrate on the next two presentations, which were both interesting and relevant to our work at Lincoln. The first was by Chris Awre, from the University of Hull, who is working on the EThOS project, a joint project between several HE institutions and the British Library (BL). It aims to provide a central repository service for e-theses produced in the UK. The idea is that the BL will harvest e-theses (ETD) metadata from university repositories to create a single point of access to this type of academic output. Interestingly, the business model for this is a subscription service, whereby universities are expected to pay for the harvesting of metadata and the digitisation of hard-copy theses when they are requested. The content itself is Open Access (search, download), financially supported by the paid-for harvesting and digitisation service. It’s always interesting to see how people are creating new business models based on giving a product away freely. I hope it’s a success.
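Metadata harvesting of this kind is typically done over OAI-PMH, with repositories exposing simple Dublin Core records; whether EThOS uses exactly that protocol is my assumption. As a sketch, using a made-up record, the fields a central service would index can be pulled out like so:

```python
import xml.etree.ElementTree as ET

# A made-up fragment in the shape of an OAI-PMH Dublin Core record:
# the kind of metadata a harvester would pull from a university repository.
SAMPLE_RECORD = """
<record xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
        xmlns:dc="http://purl.org/dc/elements/1.1/">
  <oai_dc:dc>
    <dc:title>An Example E-Thesis</dc:title>
    <dc:creator>Smith, Jane</dc:creator>
    <dc:type>Thesis</dc:type>
  </oai_dc:dc>
</record>
"""

DC = "{http://purl.org/dc/elements/1.1/}"

def extract_metadata(xml_text):
    """Pull out the Dublin Core fields a central service would index."""
    root = ET.fromstring(xml_text)
    return {field: root.findtext(".//" + DC + field)
            for field in ("title", "creator", "type")}
```

Aggregating records like this across many repositories is what gives the BL its single point of access, while the full-text stays wherever it was deposited.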

The third presentation was by Vanessa Proudman, from Tilburg University and the DRIVER Project. This was excellent, not least because of the rare clarity of presentation but also because the research findings are directly relevant and useful to us at Lincoln as we embark on establishing a repository service in the University. Vanessa looked at the challenges we face in populating our repositories and suggested key methods of increasing the number of deposits, noting that even with a Mandate, the deposit rate is only 40-60%. This work is published as part of a new book (chapter 3), which, naturally, can be downloaded here. Upon return to work, I intend to look at this in detail and begin drafting a plan for the next phase of our repository project, which is to establish an Open Access Mandate at the University and begin the important advocacy work within the Faculties.

Session 3: Interoperability

The final three presentations of the day focussed on interoperability. The first two specifically discussed ways to make it easier for users to deposit materials into repositories. Julie Allinson, from the SWORD Project, discussed the work they have done and their use of the Atom Publishing Protocol as the framework from which the SWORD deposit profile is derived. The presentation finished by noting that a NISO standard and tools are being developed for this same purpose, and it is hoped that they will take into consideration the work done by the SWORD project.
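Since SWORD profiles the Atom Publishing Protocol, a deposit is essentially an Atom entry, plus a content package, POSTed to a repository’s deposit URL. As a minimal sketch of the Atom envelope only (the field values are invented, and I’ve omitted SWORD’s own extension elements), an entry can be built like this:

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def make_atom_entry(title, author, summary):
    """Build a minimal Atom entry, the envelope an AtomPub-style
    deposit wraps around the content being submitted."""
    ET.register_namespace("", ATOM)  # serialise without a namespace prefix
    entry = ET.Element("{%s}entry" % ATOM)
    ET.SubElement(entry, "{%s}title" % ATOM).text = title
    author_el = ET.SubElement(entry, "{%s}author" % ATOM)
    ET.SubElement(author_el, "{%s}name" % ATOM).text = author
    ET.SubElement(entry, "{%s}summary" % ATOM).text = summary
    return ET.tostring(entry, encoding="unicode")

# In a real deposit, this XML (and a zipped content package) would be
# POSTed to the repository's deposit endpoint.
entry_xml = make_atom_entry("A Sample Paper", "A. Scholar",
                            "Deposited via an AtomPub-style client.")
```

The appeal of building on AtomPub is exactly this simplicity: any HTTP client that can POST XML can, in principle, deposit into any compliant repository.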

Scott Yeadon, from the Australian National University, gave a presentation on tools which the RIFF Project has developed for DSpace and Fedora to facilitate easier deposit of content into these repositories. Their work took real-world examples of content to deposit and developed a submission service, a METS content packaging profile and a dissemination service.

Both the SWORD and RIFF Projects demonstrated working examples of their services, albeit in early form. The main question remaining is whether they will be adopted beyond the confines of the project. Part of project work is research and development, but a significant part is also the marketing of the results of the project, for which OR2008 is clearly an important venue.

Finally, Dean Krafft, from Cornell University, presented NCore, a wide-ranging set of open source tools for creating digital repositories. Much bigger in scale than the previous two projects, the NCore platform is notable for being released on SourceForge as a community project. It also has guaranteed funding until 2012, suggesting that even greater work is to come. It’s essentially a suite of software tools and services built around the Fedora repository, developed to manage millions of objects, initially at the National Science Digital Library (NSDL). It was an excellent presentation of what appears to be a successful project and set of products. Building a Fedora repository requires a higher investment of resources than installing DSpace or EPrints, and projects which use this platform, although often complex and difficult, tend to produce very interesting and impressive results.