Getting your Triples into Talis Connected Commons

A few days ago, I wrote about adding Triplify to your web application. Specifically, I wrote about adding it to WordPress, but the same information can be applied to most web publishing platforms. Earlier this month, TALIS announced their Connected Commons platform and yesterday they announced a commercial version of their platform for the structured storage of Linked Data. Storage is all very well, but more importantly they have an API for developers, so that the data can be queried and creatively re-used or mashed up.

So this got me thinking about JISCPress, our recent JISC Rapid Innovation Programme bid, which proposes a WordPress Multi-User based platform for publishing JISC funding calls and the reports of funded projects. This is based on my experience of running WriteToReply with Tony Hirst.

Although WriteToReply is primarily a service for comment and discussion around documents, one of the things that interests me most about it and, consequently, the JISCPress proposal, is the cumulative storage of data on the platform and how that data might be used. No surprise really, as my background is in archiving and collections management. As with the University of Lincoln blogs, WriteToReply and the proposed JISCPress platform aggregate published content into a site-wide ‘tags’ site that allows anyone to search and browse through all publicly published content. In the case of the university blogs, that’s a large percentage of blogs, but for WriteToReply and JISCPress, it would be pretty much every document hosted on the platform.

You can see from the WriteToReply tags site that, over time, a rich store of public documents could be created for querying and re-use. The site design is a bit clunky right now, but under the hood you’ll notice that you can search across the text of every document and browse by document type and by tag. The tags are created by publishing the content to OpenCalais, which returns a whole bunch of semantic keywords for each document section. You’ll also notice that an RSS feed is available for any search query, any category and any tag or combination of tags.
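
As an aside, those feeds make the data easy to consume programmatically. Here is a minimal sketch using Python and feedparser; the feed URLs are placeholders for illustration, not necessarily the real WriteToReply paths:

  import feedparser  # pip install feedparser

  # Hypothetical feed URLs -- placeholders, not the actual WriteToReply paths.
  feeds = [
      "http://writetoreply.example.org/tag/communications-data/feed/",
      "http://writetoreply.example.org/?s=broadband&feed=rss2",
  ]

  for url in feeds:
      parsed = feedparser.parse(url)
      for entry in parsed.entries:
          print(entry.title, "-", entry.link)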

Last night, I was thinking about the WriteToReply site architecture (note that when I mention WriteToReply, it almost certainly applies to JISCPress, too – same technology, similar principles, different content). Currently, we categorise each document by document type, so you’ll see ‘Consultations’, ‘Action Plans’, ‘Discussion Papers’, etc. We author all documents under the WriteToReply username, too, and tag each document section both manually and via OpenCalais. However, there’s more that we could do, with little effort, to mark up the documents, and I’ve started sketching it out.

You’ll see from the diagram that I’m thinking we should introduce location and subject categories. There are formal classification schemes we could use. For example, I found a Local Government Classification Scheme, which provides some high-level subjects that are the type of thing I’m thinking about. I’m not suggesting we start ‘cataloguing’ the documents, but simply borrow, at the top level, from recognised classification schemes that are used elsewhere. I’m also thinking that we should start creating a new author for each document and, in the case of WriteToReply, the author would be the agency that issued the consultation, report, or whatever.

So, following these changes, we would capture the following data, for example:

The Home Office created Protecting the public in a changing communications environment on April 27th, which is a consultation document for England, Wales and Scotland, categorised under Information and communication technology, with 18 sections.

Section one is tagged Governor, Home Department, Office of Public Sector Information, Secretary of State, Surrey.

Section two is tagged communications data, communications industry, emergency services, Home Secretary, Jacqui Smith MP, Rt Hon Jacqui Smith MP.

Section three is tagged Broadband, BT, communications, communications changes, communications data, communications data capability, communications data limits, communications environment, communications event, communications industry, communications networks, communications providers, communications service providers, communications services, emergency services, Her Majesty’s Revenue and Customs, Home Office, intelligence agencies, internet browsing, Internet Protocol, Internet Service, IP, mobile telephone system, physical networks, public telecommunications service, registered owner, Serious Organised Crime Agency, social networking, specified communications data, The communications industry, United Kingdom.

Section four is tagged …(you get the picture)

Section five, paragraph six, has the comment ‘“fully compatible with the ECHR” is, of course, an assertion made by the government about its own legislation. Has that assertion ever been tested in a court?’, authored by Owen Blacker on April 28th at 11:32pm.

Selected text from section five, paragraph eight, has the comment ‘Over my dead body!’, authored by Mr Angry on April 28th at 9:32pm.

Note that every author, document, section, paragraph, text selection, category, tag, comment and comment author has a URI, Atom, RSS and RDF endpoint (actually, text selection and comment author feeds are forthcoming features).

Now, with this basic architecture mapped out, we might wonder what Triplify could add. I’ve already shown in my earlier post that, with little effort, it re-publishes data from a relational database as N-Triples semantic data, so everything you see above could be published as RDF data (and JSON, too).
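
Purely as an illustration of what that output might look like, the example data above could be expressed as N-Triples along these lines (the URIs and choice of Dublin Core properties here are hypothetical, not actual WriteToReply or Triplify output):

  # Hypothetical N-Triples -- illustrative URIs and properties only.
  <http://writetoreply.example.org/commsdata> <http://purl.org/dc/terms/title> "Protecting the public in a changing communications environment" .
  <http://writetoreply.example.org/commsdata> <http://purl.org/dc/terms/creator> <http://writetoreply.example.org/author/home-office> .
  <http://writetoreply.example.org/commsdata> <http://purl.org/dc/terms/coverage> "England, Wales and Scotland" .
  <http://writetoreply.example.org/commsdata/section-1> <http://purl.org/dc/terms/isPartOf> <http://writetoreply.example.org/commsdata> .
  <http://writetoreply.example.org/commsdata/section-1> <http://purl.org/dc/terms/subject> "Office of Public Sector Information" .
  <http://writetoreply.example.org/commsdata/section-2> <http://purl.org/dc/terms/subject> "communications data" .

Every resource with a URI in the architecture above (sections, paragraphs, tags, comments and their authors) could be described in the same way.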

So, in my simple view of the world, we have a data source that requires very little effort to generate content for and manage (JISCPress/WriteToReply/WordPress), a method of automatically publishing the data for the semantic web (Triplify) and, with TALIS, an API for data storage, data access, query, and augmentation.  As always, my mantra is ‘I am not a developer’, but from where I’m standing, this high-level ‘workflow’ seems reasonable.
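
To make that workflow concrete, here is a rough sketch in Python of pushing the Triplify output into a store and querying it back out with SPARQL. The store URL, endpoint paths and output parameter are placeholders, not the actual Talis Platform API, so treat this as an outline rather than working client code:

  import requests

  # Hypothetical store endpoints -- placeholders, not the real Talis Platform URLs.
  STORE = "https://api.example.org/stores/jiscpress"

  # 1. Push the Triplify output (N-Triples) into the store.
  with open("jiscpress.nt", "rb") as nt_file:
      resp = requests.post(
          f"{STORE}/meta",
          data=nt_file.read(),
          # MIME type for N-Triples varies by store; text/plain is the classic one.
          headers={"Content-Type": "text/plain"},
      )
  resp.raise_for_status()

  # 2. Query it back out with SPARQL.
  query = """
  SELECT ?doc ?title WHERE {
    ?doc <http://purl.org/dc/terms/title> ?title .
  }
  """
  r = requests.get(f"{STORE}/services/sparql",
                   params={"query": query, "output": "json"})
  r.raise_for_status()
  for row in r.json()["results"]["bindings"]:
      print(row["doc"]["value"], "-", row["title"]["value"])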

The benefits for the JISC community would primarily be felt by using the JISCPress website, in a similar way (albeit with better, more informed design) to the WriteToReply ‘tags’ site. We could search across the full text of funding calls, browse the reports by author, category and tag, and grab news feeds from favourite authors, searches, tags or categories. This is all in addition to the comment, feedback and discussion features we’ve proposed, too. Further benefits would come from ‘re-publishing’ the site content as semantic data to a platform such as TALIS. Not only could there be further Rapid Innovation projects which worked on this data, but it would be available for any member of the public to query and re-use, too. No longer would our final project reports, often the distillation of our research, sit idle as PDF files on institutional websites and in institutional repositories. If the documentation we produce is worth anything, then it’s worth re-publishing openly as semantic data.

Finally, in order to benefit from the (free) use of TALIS Connected Commons, the data being published needs to be licensed under a public domain or Creative Commons ‘zero’ licence. I suspect Crown Copyright is not compatible with either of these licences, although why the hell public consultation documents couldn’t be licensed this way, I don’t know. Do you? For JISCPress, this would be a choice JISC could make. The alternative is to use the commercial TALIS platform or something similar.

As usual, tell me what you think… Thanks.

Open Education Project Blueprint

Each participant on the Mozilla Open Education Course has been asked to develop a project blueprint. Here is the start of mine. It’s basically a ‘Personal Learning Environment’ (PLE) ((See Personal Learning Environments: Challenging the dominant design of educational systems)) and I’m going to try to show how WordPress MU is a good technology platform for an institution to easily and effectively support a PLE. I’m going to place an emphasis on ‘identity’ because it’s something I want to learn more about.

Short description

University students are at least 18 years old and have spent many years unconsciously accumulating or deliberately developing a digital identity. When people enter university, they are expected to accept a new digital identity, one which rarely acknowledges, and cannot easily exploit, their preceding experience and productivity. Students are given a new email address and a university ID, and are expected to submit course work using new, institutionally unique tools and to develop a portfolio of work over three to four years which is set apart from their existing portfolio of work and often difficult to fully exploit after graduation.

I think this will be increasingly questioned and resisted by individuals paying to study at university. Both students and staff will suffer this disconnect caused by institutions not employing available online technologies and standards rapidly enough. There is a legacy of universities expecting and being expected to provide online tools to staff and students. This was useful and necessary several years ago, but it’s now quite possible for individuals in the UK to study, learn and work apart from any institutional technology provision. For example, Google provides many of these tools and will have a longer relationship with the individual than the university is likely to.

Many students and staff are relinquishing institutional technology ties, and one indicator of this is the large percentage of students who do not use their university email address (96% in one case study). In the UK, universities are keen to accept mature, work-based and part-time students. For these students, university is just a single part of their lives and should not require the development of a digital identity that mainly serves the institution rather than the individual.

How would it work?

Students identify themselves with their OpenID, which authenticates against a Shibboleth Service Provider. ((See the JISC Review of OpenID.)) They create, publish and syndicate their course work, privately or publicly, using the web services of their choice. Students don’t turn in work for assessment, but rather publish their work for assessment under a CC licence of their choice.

It’s basically a PLE project blueprint with an emphasis on identity and data portability. I’m pretty sure I’m not going to get a fully working model to demonstrate by the end of the course, but I will try to show how existing technologies could be stitched together to achieve what I’m aiming for. Of course, the technologies are not really the issue here; the challenge is showing how this might work in an institutional context.

I think I can show that it’s technically possible using a single platform such as WordPress, which has Facebook Connect, OAuth, OpenID, Shibboleth and RPX plugins. WordPress is also microformat-friendly, and profile information can easily be exported in the hCard format. hResume would be ideal for developing an academic profile. The Diso project is leading the way in this area.
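
To illustrate the microformats point, an hCard is just ordinary HTML with agreed class names, so hypothetical profile markup like the snippet below (not the output of any particular plugin) can be read by any microformat-aware tool:

  <div class="vcard">
    <a class="url fn" href="http://blogs.example.ac.uk/jsmith">Jane Smith</a>
    <span class="org">University of Lincoln</span>
    <a class="email" href="mailto:jsmith@example.ac.uk">jsmith@example.ac.uk</a>
  </div>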

Similar projects:

UMW Blogs?

Open Technology:

OpenID, OAuth, RPX, Shibboleth, RSS, Atom, Microformats, XMPP, OPML, AtomPub, XML-RPC + WordPress

Open Content / Licensing:

I’ll look at how Creative Commons licensing may be compatible with our staff and student IP policies.

Open Pedagogy

No idea. This is a new area for me. I’m hoping that the Mozilla/CC Open Education course can point me in the right direction for this. Maybe you have some suggestions, too?

Anytime Anywhere Computing

Together with the ICT Department, we’ve recently begun a feasibility study which looks at related areas of the university’s ICT provision. It brings together three originally separate proposals: to upgrade our wireless infrastructure, provide a more flexible desktop experience through virtualisation, and improve our understanding of and support for netbooks and mobile internet devices. It’s good to be working so closely with our ICT Department. So often I hear people at other institutions complain about their ICT departments being ‘brick walls’ and showing no flexibility, but fortunately I can’t say that about my experience at the University of Lincoln.

The Head of ICT sent me a link to this video today. It’s a good example of why our study is both necessary and worthwhile.

[kml_flashembed movie="http://uk.youtube.com/v/uRUTtpk9EHg" width="425" height="350" wmode="transparent" /]

Related to this, Tony Hirst recently bookmarked this ICT syllabus for 13-14 year olds, which, together with the video, provides a clear indication of what’s happening in schools.

We’re working closely with the Student Council and academics, and intend to survey them on the issues raised by the study early in the new year. We’ve started talking to vendors of desktop and application virtualisation ‘solutions’, too (the virtualisation of our server infrastructure is almost complete). We’re also lining up some visits to other institutions that have experience in these areas.

If you or your school, FE or HE institution has seriously considered or implemented desktop and/or application virtualisation, a full-service wireless infrastructure (i.e. one that matches the services on your wired network) or support for Linux and XP-based netbooks and other mobile devices, please do get in touch or leave a comment below.

These days are full

I am conscious that it’s been almost a month since I last wrote here, but that is largely due to my work on other projects, websites and blogs. Here’s an overview of some of the work I’m currently involved in. If you’re working on similar projects or want to discuss or collaborate on any of this, do get in touch.

The Learning Lab

I recently wrote a brief summary of the work I’ve been doing under the ‘Learning Lab’ banner since I started my work as Technology Officer in the Centre for Educational Research and Development. WordPressMU occupied a large chunk of my summer, though I feel I have a good understanding of it now and can relax a little while supporting staff and students who wish to use it. It will soon be moving to its new, permanent home at http://blogs.lincoln.ac.uk

One of the unexpected outcomes of working on WordPressMU was the realisation that not only training but a different model of support is key to sustaining and improving the use of blogs and other Web 2.0 tools.  I’m keen to advocate and support the user-to-user support model that most open source and social web services develop rather than the traditional user-to-professional, ‘Help Desk’ model that exists for much of the software provided by the university. Models of user support are not something I’ve taken much of an interest in until recently, but the reality is that I alone am unable to support the growing adoption of WordPressMU at the university and I need to encourage staff and students to help themselves wherever possible.

Having said that, with colleagues in the Library and Research Office, I’m also planning to offer regular staff training sessions on the use of Web 2.0 tools in education, and I’m visiting classes to give one-hour introductions to WordPress, which is a good opportunity to work with and learn from both students and staff. In addition to this, I’m contributing towards the revision of policy documents which ensure that these new tools are used effectively and appropriately.

Lincoln Academic Commons

This is something I’m developing to promote and support the various initiatives at the university which provide Open Access to our research, teaching and learning. I started working for the university on a JISC-funded project to develop an institutional repository, having worked as an Archivist and Project Manager of a Digital Asset Management system in my previous job. Then, a few months ago, I heard about the difficulties people in the Lincoln Business School were having trying to establish a series of ‘Occasional Working Papers’ (OWPS) using existing portal software provided by the university. At the same time, I was looking at Open Journal Systems (OJS) for publishing Open Access journals, so I suggested that we set up the OWPS using OJS. Seeing what a great piece of software OJS is, I then suggested we use it for NEO, a planned journal of student research which we intend to launch in the Spring. Finally (and this is where it gets really interesting for me), Mike Neary, Dean of Teaching and Learning and Head of the Centre for Educational Research and Development, is advocating a more critical engagement with the debates about the marketisation of higher education through teaching practice. He’s calling this critical engagement ‘Teaching in Public’, which encompasses the idea of an Academic Commons.

Professor Neary argues that the uncertainty over the university’s mission requires the notion of ‘the public’ to be reconceptualized, so as to remake the university as an academic project that confronts the negative consequences of academic capitalism and the commodification of everyday life. He will present Karl Marx’s concept of the ‘general intellect’ as an idea through which the university might be remade.

I contributed to a book chapter Mike has recently written, which elaborates on this in more detail. You can read more about that in a previous blog post.

Access Grid

A project I’ve been leading for some months now is the installation of an Access Grid node at the university. We were fortunate to be approached several months ago by the Mental Health Research Network (MHRN), which offered to fund the installation of an AG node to support their staff who work at the university and to provide a facility that is otherwise missing in Lincolnshire. It’s been a really interesting and useful project for me, as I’ve learned how the university undertakes a tendering exercise and I’ve been able to work with colleagues from across the university. The node should be available to use sometime in January. The Access Grid project is yet another technology-based initiative at the university which further improves our research infrastructure and supports collaboration and the wider exchange of ideas among colleagues worldwide.

Anytime, Anywhere Computing

This is a new project that brings together three originally separate proposals that the ICT department and CERD were planning to take forward. It covers:

  1. ubiquitous wireless networking
  2. so-called ‘thin client’ technology as an alternative to desktop PCs and the management of software applications and resources
  3. access via user-owned devices, such as low-cost and increasingly popular ‘netbook’ hardware

We’re just starting to look at how we might offer the same user experience and services on our wireless network as we provide on our wired network. Currently, the wireless network only offers Internet access. At the same time, we’re interested in evaluating new virtualisation technologies for the desktop. The ICT department is concluding a server consolidation project which is virtualising much of our server infrastructure. This brings many benefits and allows the ICT department to provide a more flexible service to users. Our new study will look at whether similar virtualisation technology can bring benefits to desktop users, too. The third part of this project is based on a proposal I made a few months ago to evaluate the user experience and support issues that the new generation of ‘netbooks’ introduces. Smaller screens, Linux operating systems, an emphasis on web-based applications and the rapid adoption of these low-cost devices, often aimed at the education sector, all require a better understanding of the impact of this technology and the influence it may have in driving students to use more and more web-based applications.

Are you working on similar initiatives? If so, please leave a comment and share your experiences.

The Virtual Studio

I am in Venice to present a paper with two colleagues from the School of Architecture at a two-day conference organised by the Metadata for Architectural Contents in Europe (MACE) Project. Yesterday was a significant day, for reasons I want to detail below. Skip to the end of this long post if you just want to know the outcome and why this conference has been an important and positive turning point in the Virtual Studio project.

I joined the university just over a year ago to work on the JISC-funded LIROLEM Project:

The Project aimed to lay the groundwork for the establishment of an Institutional Repository that supports a wide variety of non-textual materials, e.g. digital animations of 3-D models, architectural documentation such as technical briefings and photographs, as well as supporting text based materials. The project arose out of the coincidental demands for the University to develop a repository of its research outputs, and a specific project in the school of Architecture to develop a “Virtual Studio”, a web based teaching resource for the school of Architecture.

At the end of the JISC-funded period, I wrote a lengthy summary on the project blog, offering a personal overview of our achievements and challenges during the course of the project. Notably, I wrote:

The LIROLEM Project was tied to a Teaching Fellowship application by two members of staff in the School of Architecture. Their intentions were, and still are, to develop a Virtual Studio which complements the physical design Studio. Although the repository/archive functionality is central to the requirements of the Virtual Studio, rather than being the primary focus of the Studio, a ‘designerly’, dynamic user interface that encourages participation and collaboration is really key to the success of the Studio as a place for critical thinking and working. In effect, the actual repository should be invisible to the Architect, who has little interest, patience or time for the publishing workflow that EPrints requires. More often than not, the Architects were talking about wiki-like functionality that allowed people to rapidly generate new Studio spaces, invite collaboration, bring in multimedia objects such as plans, images and models, and offer comment, discussion and critique. As student projects developed in the Virtual Studio, finished products could be archived and showcased, inviting another round of comment, critique and possibly derivative works from a wider community outside the classroom Studio.

Our conference paper discussed the difficulties of ensuring that the (minority) interests of the Architecture staff were met while trying to gain widespread institutional support and sustainability for the Institutional Repository which the LIROLEM project aimed, and had an obligation, to achieve. During the presentation (below), we asked:

Can academics and students working in different disciplines be easily accommodated within the same archival space?

Our presentation slides. My bicycle is a reference to Bijker (1997)

The paper argues that advances in technology result from complex and often conflicting social interests. Within the context of the LIROLEM Project, it was the wider interests of the Institution which took precedence, rather than the minority interests of the Architecture staff. I’m not directing criticism towards decisions made during the project (after all, I made many of them so as to ensure the long-term sustainability of the repository), but yesterday we argued that

architecture is an atypical discipline; its emphasis is more visual than literary, more practice than research-based and its approach to teaching and learning is more fluid and varied than either the sciences or the humanities (Stevens, 1998). If we accept that it is social interests that underlie the development of technology rather than any inevitable or rational progress (Bijker, 1997), the question arises as to what extent an institutional repository can reconcile architectural interests with the interests of other disciplines. Architecture and the design disciplines are marginal actors in the debate surrounding digital archive development, this paper argues, and they bring problems to the table that are not easily resolved given available software and that lie outside the interests of most other actors in academia.

Prior to the conference, I was unsure what to do next about the Virtual Studio. I felt that the repository was the wrong application for supporting a collaborative studio environment for architects. Central to this was the unappealing deposit and cataloguing workflow in the IR, and the general aesthetic of the user interface which, despite some customisation, does not meet designers’ expectations of a visual tool for the deposit and discovery of architectural materials.

However, the MACE Project appears to have just come to our rescue with the development of tools that query OAI-PMH data mapped to their LOM profile, enrich the harvested metadata (by using external services such as Google Maps and collecting user-generated tags, for example) and provide a social platform for searching participating repositories. I managed to ask several questions throughout the day to clarify how the anticipated architectural content in our repository could be exposed to MACE. My main concern was our issue of having a general-purpose Institutional Repository, but wanting to handle subject-specific (architecture) content in a unique way. I was told that OAI-PMH has a ‘set’ attribute which could be used to isolate the architectural content in the IR for harvesting by MACE. Another question related to the building of defined communities or groups within the larger MACE community (i.e. students on a specific course), and I was told that this is a feature they intend to implement.
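
To illustrate what that looks like in practice, OAI-PMH selective harvesting is just an HTTP request with a ‘set’ argument. The sketch below uses a hypothetical base URL and set name (the real EPrints endpoint and set spec would need confirming):

  import requests
  import xml.etree.ElementTree as ET

  # Hypothetical repository base URL and set name -- placeholders only.
  BASE_URL = "http://repository.example.lincoln.ac.uk/cgi/oai2"

  params = {
      "verb": "ListRecords",
      "metadataPrefix": "oai_dc",
      "set": "architecture",  # the set isolating architectural content
  }
  response = requests.get(BASE_URL, params=params)
  response.raise_for_status()

  # Print the Dublin Core titles in the first page of results.
  ns = {"dc": "http://purl.org/dc/elements/1.1/"}
  root = ET.fromstring(response.content)
  for title in root.findall(".//dc:title", ns):
      print(title.text)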

Because of the work of MACE, the development of a search interface and ‘studio’ community platform has largely been done for us (at least to the level of expectation we ever had for the project). Ironically, we came to the conference questioning the use of the IR as the repository for the Virtual Studio, but now believe that we may benefit from the interoperability of the IR, despite suffering some of its other less appealing attributes. One of the things that remains for us to do is to improve the deposit experience, to ensure we collect content that can be exposed to the MACE platform.

For this, I hope we can develop a SWORD tool that simplifies the deposit process for staff and students, reducing the workflow down to the two or three brief steps you find on Flickr or YouTube, repositories they are likely to be familiar with and judge others against. User profile data could be collected from their LDAP login information, and they would be asked to title, describe and tag their work. A default BY-NC-ND Creative Commons licence would be chosen for them, which they could opt out of (though by doing so they would also opt out of MACE harvesting).
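
As a very rough sketch of how such a mediated deposit might look over SWORD (the endpoint, credentials and headers below are hypothetical and would depend on the repository’s SWORD version and configuration):

  import requests

  # Hypothetical SWORD collection URL and service credentials -- placeholders only.
  COLLECTION = "http://repository.example.lincoln.ac.uk/sword-app/deposit/studio"

  with open("studio-project.zip", "rb") as package:
      response = requests.post(
          COLLECTION,
          data=package,
          auth=("virtualstudio", "secret"),  # service account, not the student's login
          headers={
              "Content-Type": "application/zip",
              "Content-Disposition": "filename=studio-project.zip",
              # Mediated deposit on behalf of the student identified via LDAP.
              "X-On-Behalf-Of": "jsmith",
          },
      )
  response.raise_for_status()
  print(response.status_code)  # expect 201 Created on success

The title, description and tags gathered from the two or three form steps would travel in the deposited package’s metadata, and the default licence could simply be recorded there unless the depositor opts out.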

Boris Müller, who works on the MACE project, spoke yesterday of the “joy of interacting with [software] interfaces.” This has clearly been a central concern of the MACE project as it has been for the Virtual Studio project, too. I’m looking forward to developing a simple but appealing interface that can bring at least a little joy to my architect colleagues and their students.

MACE Conference

On Friday, Andy, Carl and I are going to the Metadata for Architectural Contents in Europe (MACE) Conference in Venice, to present a paper which reflects on the issues raised during our JISC-funded LIROLEM Project. Here’s a Word Cloud of the paper, for those of you who don’t have the time or inclination to read it. For those of you who do, it’s in our repository, of course.

 

Image created at wordle.net

ALT-C 2008 Keynote: OLPC and the XO-1 laptop

Just a quick post.  The final keynote of the ALT-C 2008 conference was by David Cavallo, Chief Learning Architect for One Laptop Per Child. I’ll link to his presentation when it’s on the ALT website.

If you’re interested in looking at the Sugar desktop and some of the Activities which are on the OLPC XO-1, then James Munro, a student here, has been working over the summer on a related project and has produced a LiveCD of Xubuntu which boots directly into Sugar with a selection of Activities to try. You can read more about the project and download the LiveCD on our wiki. Without an XO-1 of your own, this is probably the easiest way to try out the Sugar desktop right now. Any comments, questions or problems, do get in touch with me or James via our blogs. Thanks!

Note: The LiveCD is still being tested. It’ll be available via the wiki soon.