Storytlr: Make your social networking tell a story

Storytlr is a relatively new ‘lifestreaming’ service that allows you to aggregate your activity on a growing number of social networking sites (and other sites that provide RSS feeds) into a single stream, which can then be manipulated to create visual narratives within a given time period. There are other lifestreaming and aggregation services, such as FriendFeed, and I use the WordPress Lifestream plugin on another blog, too.

There are several things I especially like about Storytlr that are worth highlighting here:

  • Manipulate the stream: You can edit the title, text content, date and time of each item in the stream, and make individual items, or the entire stream, private.
  • Visual Narratives: Create ‘stories’ from selected feed items within a certain time frame. For example, I might go to a conference and use this blog to report back to my colleagues. Using Storytlr, I could combine blog, Twitter, Flickr and YouTube posts to create a narrative over two or three days. However, I’m probably also using Twitter to keep in touch with other conference participants; things like what time to meet up for a beer, or to ask where a presentation is when I have forgotten the room number. That’s the kind of thing I wouldn’t necessarily want to include in my report of the conference. Storytlr allows me to create the conference report by selecting specific items from the Twitter, Flickr, YouTube and blog feeds. You can see how this could also be used by students (or staff) who want to tell the story of a project they are working on, or a field trip they’re away on. Several people could share and post to the same account.
  • Some feeds arrive in realtime: Storytlr uses GNIP to import updates from Twitter, Digg, Delicious and Seesmic in realtime. Increasingly, there’s an expectation that our online activity will show up in realtime; RSS/pull is being replaced by XMPP/push architectures such as GNIP. No more waiting for RSS feeds to refresh! Watch for news sites like the BBC to start offering realtime news updates using GNIP or similar.
  • Backup to plain text: You can back up and download each of your feeds in its entirety at any time as CSV files (see the sketch after this list).
  • Custom CSS and domain names: It’s your story so why not host it under your domain name in a theme that you have designed?
  • You can share stories on external sites: Once you’ve created a story or aggregated your lifestream, you can then embed it on other sites using Storytlr widgets.
  • Edit, archive, search and republish your lifestream: I use Delicious and Google Reader’s Shared Items to bookmark web pages that I want to share or, more often, bookmark to read at a later date. Storytlr provides a way to aggregate these items, archive them by month and search through them. Nice.
  • Support for Laconica microblogging sites: They support my personal installation of Laconica; it’s the first time I’ve seen this. Support for Identi.ca is growing, but it’s nice to see support for other Laconica installations too. It’s a distributed microblogging application, after all!
  • Forthcoming: It’s early days. They have plans for lots of other features, which users can vote for. Their blog is worth reading, too.
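
As an aside on the backup feature mentioned above, here is a minimal sketch (in Python) of how you might work with one of those CSV exports once downloaded, grouping items by month in the way Storytlr archives them. The filename and column names used here are assumptions for illustration; check the header row of your own export for the actual field names.

```python
import csv

def load_backup(path):
    """Read a Storytlr CSV export of a single feed into a list of dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def items_by_month(rows, date_field="published"):
    """Group exported items by YYYY-MM so they can be browsed or searched offline.

    The "published" field name is a guess; substitute whatever the export's
    header row actually calls its timestamp column.
    """
    groups = {}
    for row in rows:
        month = row.get(date_field, "")[:7]  # e.g. "2009-01"
        groups.setdefault(month, []).append(row)
    return groups

if __name__ == "__main__":
    rows = load_backup("twitter_backup.csv")  # hypothetical filename
    for month, items in sorted(items_by_month(rows).items()):
        print(month, len(items), "items")
```

Nothing sophisticated, but it shows why a plain-text export matters: your lifestream remains yours to archive and query, whatever happens to the service.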

A few issues

  • Login is not secure: There’s no https or lock icon in my browser when I log in, and there are only two of us voting for this feature to be implemented!
  • Home-made: It’s self-financed and developed by two blokes in their spare time, from their living room.
  • Speed: It’s a bit slow; a search through your feeds can take a while. The good news is that they’re moving to new servers at the end of January, which should resolve this.

Interested? Here are links to my lifestream and a test story of notes from my Christmas break.

Microformats and Firefox

When I have time, I like to read about new and developing web standards and specifications. Sad, you might think, but it’s a way of learning about some of the theoretical developments that eventually turn into practical functionality for all users of the Internet.  Also, I am an Archivist (film, audiovisual, multimedia) by trade, and am somewhat reassured by the development of standards and specifications as a way of achieving consensus among peers and avoiding wasted time and effort in managing ‘stuff’.

So, while poking around on Wikipedia last night, I came across ‘Operator’, an add-on for Firefox that makes part of the ‘hidden’ semantic web immediately visible and useful to everybody. If you’re using Firefox, click here to install it. It’s been available for over a year now and is mature and extensible through the use of user scripts. It’s been developed by Michael Kaply, who works on web browsers for IBM and is responsible for microformat support in Firefox.

Operator leverages microformats and other semantic data that are already available on many web pages to provide new ways to interact with web services.

In practice, Operator is a Firefox toolbar (and/or location/status bar icon) that identifies microformats and other semantic data in a web page and allows you to combine that information with other web services such as search, bookmarking and mapping. For example, this blog has tags: Operator identifies them and then offers the option of searching various services, such as Amazon, YouTube, Delicious and Upcoming, for a particular tag. If Operator finds geo data, it offers the option of mapping it on Google Maps and, on this page for example, it identifies me as the author and allows you to download my contact details, which are embedded in the XHTML. Because it is extensible through user scripts, there are many other ways that the microformat data can be used.
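
For anyone curious about what Operator is actually matching on, here is a rough sketch in Python of the kind of scan it performs: walk a page’s markup and note where common microformat root classes (hCard’s ‘vcard’, the ‘geo’ location format) and rel-tag links appear. Real microformat parsing handles nested properties and is considerably more involved; this only spots the top-level markers.

```python
from html.parser import HTMLParser

# Root class names of some common microformats and a human-readable label.
MICROFORMAT_CLASSES = {
    "vcard": "hCard contact",
    "vevent": "hCalendar event",
    "geo": "geo location",
    "hreview": "hReview",
}

class MicroformatSpotter(HTMLParser):
    """Record which microformat markers appear in a chunk of (X)HTML."""

    def __init__(self):
        super().__init__()
        self.found = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        for cls in (attrs.get("class") or "").split():
            if cls in MICROFORMAT_CLASSES:
                self.found.append(MICROFORMAT_CLASSES[cls])
        # rel-tag: a link whose rel attribute includes "tag" marks up a tag
        if tag == "a" and "tag" in (attrs.get("rel") or "").split():
            self.found.append("tag: " + (attrs.get("href") or ""))

if __name__ == "__main__":
    spotter = MicroformatSpotter()
    spotter.feed('<div class="vcard"><a class="fn url" href="/">Joss</a></div>'
                 '<a rel="tag" href="/tag/microformats">microformats</a>')
    print(spotter.found)  # ['hCard contact', 'tag: /tag/microformats']
```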

Of particular interest to students and staff, perhaps, are the microformat specifications for resumes and contact details. A properly marked-up website (and WordPress allows for some of this already) could potentially provide a rich and useful portfolio of a person’s work and experience, semantically linked to other services such as Institutional Repositories or other publications databases where their work is held.

After using it for a few hours, I now find myself disappointed when a website doesn’t offer at least one piece of semantic data that Operator can find (currently, most don’t but some do). Microformat support will be built in (rather than provided by an add-on) to Firefox 3.1 and IE 8, so we can expect to see much more widespread adoption. A good thing.

There’s a nice demonstration of microformats here, using the Operator plugin.

The Virtual Studio

I am in Venice to present a paper with two colleagues from the School of Architecture at a two-day conference organised by the Metadata for Architectural Materials in Europe (MACE) Project. Yesterday was a significant day, for reasons I want to detail below. Skip to the end of this long post if you just want to know the outcome and why this conference has been an important and positive turning point in the Virtual Studio project.

I joined the university just over a year ago to work on the JISC-funded LIROLEM Project:

The Project aimed to lay the groundwork for the establishment of an Institutional Repository that supports a wide variety of non-textual materials, e.g. digital animations of 3-D models, architectural documentation such as technical briefings and photographs, as well as supporting text based materials. The project arose out of the coincidental demands for the University to develop a repository of its research outputs, and a specific project in the school of Architecture to develop a “Virtual Studio”, a web based teaching resource for the school of Architecture.

At the end of the JISC-funded period, I wrote a lengthy summary on the project blog, offering a personal overview of our achievements and challenges during the course of the project. Notably, I wrote:

The LIROLEM Project was tied to a Teaching Fellowship application by two members of staff in the School of Architecture. Their intentions were, and still are, to develop a Virtual Studio which complements the physical design Studio. Although the repository/archive functionality is central to the requirements of the Virtual Studio, rather than being the primary focus of the Studio, a ‘designerly’, dynamic user interface that encourages participation and collaboration is really key to the success of the Studio as a place for critical thinking and working. In effect, the actual repository should be invisible to the Architect, who has little interest, patience or time for the publishing workflow that EPrints requires. More often than not, the Architects were talking about wiki-like functionality that allowed people to rapidly generate new Studio spaces, invite collaboration, bring in multimedia objects such as plans, images and models, and offer comment, discussion and critique. As student projects developed in the Virtual Studio, finished products could be archived and showcased, inviting another round of comment, critique and possibly derivative works from a wider community outside the classroom Studio.

Our conference paper discussed the difficulties of ensuring that the (minority) interests of the Architecture staff were met while trying to gain widespread institutional support and sustainability for the Institutional Repository which the LIROLEM project aimed, and had an obligation, to achieve. During the presentation (below), we asked:

Can academics and students working in different disciplines be easily accommodated within the same archival space?

Our presentation slides. My bicycle is a reference to Bijker (1997)

The paper argues that advances in technology result from complex and often conflicting social interests. Within the context of the LIROLEM Project, it was the wider interests of the institution that took precedence, rather than the minority interests of the Architecture staff. I’m not directing criticism at decisions made during the project; after all, I made many of them myself in order to ensure the long-term sustainability of the repository. But yesterday we argued that

architecture is an atypical discipline; its emphasis is more visual than literary, more practice- than research-based, and its approach to teaching and learning is more fluid and varied than either the sciences or the humanities (Stevens, 1998). If we accept that it is social interests that underlie the development of technology rather than any inevitable or rational progress (Bijker, 1997), the question arises as to what extent an institutional repository can reconcile architectural interests with the interests of other disciplines. Architecture and the design disciplines are marginal actors in the debate surrounding digital archive development, this paper argues, and they bring problems to the table that are not easily resolved given available software and that lie outside the interests of most other actors in academia.

Prior to the conference, I was unsure what to do next about the Virtual Studio. I felt that the repository was the wrong application for supporting a collaborative studio environment for architects. Central to this were the unappealing deposit and cataloguing workflow in the IR and the general aesthetic of the user interface, which, despite some customisation, does not meet designers’ expectations of a visual tool for the deposit and discovery of architectural materials.

However, the MACE Project appears to have just come to our rescue with the development of tools that query OAI-PMH data mapped to their LOM profile, enrich the harvested metadata (by using external services such as Google Maps, and by collecting user-generated tags, for example) and provide a social platform for searching participating repositories. I managed to ask several questions throughout the day to clarify how the anticipated architectural content in our repository could be exposed to MACE. My main concern was our issue of having a general-purpose Institutional Repository but wanting to handle subject-specific (architecture) content in a unique way. I was told that OAI-PMH has a ‘set’ attribute which could be used to isolate the architectural content in the IR for harvesting by MACE. Another question related to the building of defined communities or groups within the larger MACE community (e.g. students on a specific course), and I was told that this is a feature they intend to implement.
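
For my own notes, here is a sketch of what that selective harvest might look like: a standard OAI-PMH ListRecords request restricted to a single set, following resumption tokens. The repository URL and the set name below are placeholders; the actual setSpec would depend on how we configure the IR.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def harvest_set(base_url, set_spec, metadata_prefix="oai_dc"):
    """Selectively harvest one OAI-PMH set (e.g. the architecture
    collection) from a repository, following resumption tokens."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix,
              "set": set_spec}
    while True:
        url = base_url + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        for record in tree.iter(OAI_NS + "record"):
            yield record
        token = tree.find(".//" + OAI_NS + "resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        # Once a resumptionToken is issued, only verb + token are sent.
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

# Hypothetical usage; the base URL and set name are placeholders:
# for rec in harvest_set("http://eprints.example.ac.uk/cgi/oai2", "architecture"):
#     print(rec.findtext(".//" + OAI_NS + "identifier"))
```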

Because of the work of MACE, the development of a search interface and ‘studio’ community platform has largely been done for us (at least to the level of expectation we ever had for the project). Ironically, we came to the conference questioning the use of the IR as the repository for the Virtual Studio, but now believe that we may benefit from the interoperability of the IR, despite suffering some of its other less appealing attributes. One of the things that remains for us to do is to improve the deposit experience, to ensure we collect content that can be exposed to the MACE platform.

For this, I hope we can develop a SWORD tool that simplifies the deposit process for staff and students, reducing the workflow to the two or three brief steps you find on Flickr or YouTube, repositories they are likely to be familiar with and to judge others against. User profile data could be collected from their LDAP login information, and they would be asked to title, describe and tag their work. A default BY-NC-ND Creative Commons license would be chosen for them, which they could opt out of (but in doing so they would also opt out of MACE harvesting).
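
As a sketch of how simple the client side of that deposit could be, the following builds a minimal Atom entry (title, description, tags, author, license) and POSTs it to a SWORD/AtomPub collection URL. This is illustrative only: the endpoint, the credential handling and the mapping of fields to Atom elements are my assumptions, and a real SWORD client would first read the repository’s service document and follow its packaging requirements.

```python
import base64
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def build_entry(title, description, tags, author, license_url):
    """Build a minimal Atom entry describing a deposit.

    The field mapping (summary for the description, category terms for tags,
    rights for the license) is an illustrative assumption, not a fixed SWORD profile.
    """
    ET.register_namespace("", ATOM)
    entry = ET.Element("{%s}entry" % ATOM)
    ET.SubElement(entry, "{%s}title" % ATOM).text = title
    ET.SubElement(entry, "{%s}summary" % ATOM).text = description
    author_el = ET.SubElement(entry, "{%s}author" % ATOM)
    ET.SubElement(author_el, "{%s}name" % ATOM).text = author
    for tag in tags:
        ET.SubElement(entry, "{%s}category" % ATOM, term=tag)
    ET.SubElement(entry, "{%s}rights" % ATOM).text = license_url
    return ET.tostring(entry, encoding="utf-8")

def deposit(collection_url, entry_xml, username, password):
    """POST the entry to a SWORD/AtomPub collection URL (placeholder auth)."""
    request = urllib.request.Request(collection_url, data=entry_xml, method="POST")
    request.add_header("Content-Type", "application/atom+xml;type=entry")
    # Basic auth shown for illustration; a real client would reuse the
    # user's institutional (e.g. LDAP-backed) credentials.
    token = base64.b64encode("{}:{}".format(username, password).encode()).decode()
    request.add_header("Authorization", "Basic " + token)
    return urllib.request.urlopen(request)

# Hypothetical usage; the deposit URL below is a placeholder, not a real endpoint:
# entry = build_entry("Studio model", "Massing study for week 3",
#                     ["architecture", "studio"], "A. Student",
#                     "http://creativecommons.org/licenses/by-nc-nd/3.0/")
# deposit("http://eprints.example.ac.uk/sword/deposit/architecture",
#         entry, "astudent", "secret")
```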

Boris Müller, who works on the MACE project, spoke yesterday of the “joy of interacting with [software] interfaces.” This has clearly been a central concern of the MACE project as it has been for the Virtual Studio project, too. I’m looking forward to developing a simple but appealing interface that can bring at least a little joy to my architect colleagues and their students.

ALT-C 2008: A different approach.

Today, I took a different approach to the conference and relaxed. Usually I try to attend as many sessions as possible and to absorb and report back on as much as I can. However, I’ve found that this quickly leaves me exhausted and somewhat removed from the rest of the conference, as it allows little time for reflection.

So, my third day in Leeds was a much more enjoyable and stimulating one. I attended sessions, picking up on one or two things that were being presented and following threads and tangents that I found online and through talking with people. One term that I’ve heard mentioned a few times is ‘lifestream’, that is, an aggregation of online activity into a timeline that can be shared with others. You can see my lifestream by going to this page. You’ll see that, following a conversation I had at F-ALT08, I looked again at OpenID and set up my own personal website as an OpenID server, learning a great deal at the same time.

You can also see that I joined identi.ca, an open source microblogging site like Twitter, and found details on setting up Laconica, the software behind identi.ca, on my own server and, potentially, the Learning Lab. My experience of using Twitter at the conference has really demonstrated the value of microblogging within a defined community as a way of rapidly communicating one-to-many messages and engaging in large asynchronous conversations.

In the morning Digital Divide Slam session, we formed small groups and, with two people I’d met previously at the fringe events, created a ‘performance’ that reflected on a form of digital divide. We chose ‘gender’, and produced this (prize-winning) video, which is now on YouTube.

During the second keynote, I drifted off and began to think about e-portfolios and about aggregating our online social activity into a profile/portfolio that is controlled by the individual and dynamically updated. I’d heard about the Attention Profiling Markup Language (APML), and spent time considering whether it could be used or adapted for aggregating a portfolio of work and experience. APML is primarily aimed at individuals’ relationships with advertisers, and at a later F-ALT session I was able to discuss the suitability of APML, or an APML-like standard, for aggregating a portfolio of work. Consequently, I’m developing an interest in this area and in other online relationships that can be made between people (see this link, too) and the data that we generate through purposeful and serendipitous online activity.
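
To sketch the idea I was turning over (and nothing more than that), the following derives a weighted ‘attention profile’ from aggregated activity, in the spirit of APML’s ranked concepts. The input format and the simple frequency-based weighting are my own assumptions for illustration, not part of the APML specification.

```python
from collections import Counter

def attention_profile(items):
    """Derive a tag-weighted profile from aggregated activity.

    items: an iterable of dicts, each with a "tags" list, e.g. drawn from a
    lifestream export. The strongest interest is normalised to 1.0.
    """
    counts = Counter(tag.lower() for item in items for tag in item.get("tags", []))
    top = counts.most_common()
    if not top:
        return {}
    _, max_count = top[0]
    return {tag: round(count / max_count, 2) for tag, count in top}

if __name__ == "__main__":
    sample = [
        {"tags": ["openid", "identity"]},
        {"tags": ["wordpress"]},
        {"tags": ["openid", "microblogging"]},
    ]
    print(attention_profile(sample))
    # {'openid': 1.0, 'identity': 0.5, 'wordpress': 0.5, 'microblogging': 0.5}
```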

Having listened to quite a lot of discussion about Web 2.0 applications over the last few days, I’m even more pleased with the decision to use WordPress as a platform for blogging, web publishing and collaboration in the Learning Lab. With WordPress, we’re able to evaluate many of the latest social web technologies and standards through its plugin system. This flexible plugin and theming system has led to the development of an entire social networking platform based on WordPress, called BuddyPress, and because it’s basically WordPress with some specific plugins and clever use of a theme, it can use any of the available WordPress plugins to connect to Facebook, Twitter, YouTube, Flickr and other popular Web 2.0 services. I’m looking forward to watching BuddyPress develop.

In the evening, we attended the conference dinner at Headingley Cricket Club. It was a great location, with good food and excellent service. Sitting next to me was one of my digital slam partners, who showed me JoikuSpot, an application that turns a mobile phone into a wifi router. There at our dinner table, he ran Joiku on a 3G Nokia phone and provided wifi access to his iPod Touch. What a great way to share high-speed network access among friends while meeting at a cafe or park to discuss work or study.

I was impressed. The Learning Landscape had extended to the cricket ground.

Number 10 is powered by WordPress

The new website for 10 Downing Street runs on WordPress (this blogging platform), using a customised Networker 1.0 theme, and has Flickr, YouTube and Twitter integrated into the sidebar. More information here and here. If you thought that a blog was ‘just a blog’, think again.

It nicely demonstrates the versatility of WordPress as an all round web publishing tool that serves not just individuals but groups and teams of people.  I expand a little on the collaborative features of WordPress here.

Web Trend Map

Following their predictions in January, the Web Trend Map 3 from Information Architects offers an interesting overview of the 300 most influential websites, illustrated along the lines of the Tokyo train map.

To get the full picture you need to either view the PDF or buy the poster.  Cast your eye over the PDF and you’ll see that among the big names that stand out are Yahoo!, MSN, Google, Wikipedia, Amazon, YouTube, eBay, WordPress and Friendster. No real surprises there.

The layout is meaningful in that the train lines correspond to different web trends, and Google sits in the centre because it is “slowly becoming a metaphor of the Internet itself”. Each of the 300 sites occupies a different train station in Tokyo, depending on the current status it is deemed to have. The cool sites can be seen in cool parts of Tokyo and, likewise, the boring sites (e.g. Facebook) have been moved to the boring areas of the city. The creators are clearly having fun at times, too. Yahoo News, for example, is located in Sugamo, where old ladies go shopping, because Yahoo News “recently hijacked the online advertisement revenue of around 250 local newspapers and locked them into a binding contract. Who reads local news? Old people.”

Despite the sarcasm, it is a genuinely useful and interesting illustration of who the players are on the web and what spaces they dominate. There are also two forecast and branding plates which, as the names suggest, illustrate where the weather is turning for some sites and how certain brands are resonating with users.

It’s good to see WordPress, an open source product (which the Learning Lab runs on), not far from the centre of everything, located between the Google Vatican and the News district on the Technology and Social Networking lines. The popularity of WordPress is no doubt due to its focus on usability and good presentation, but also because, as an open source product, it attracts a large developer community who write plugins to extend the basic functionality of the blogging platform, making it attractive to people who want their blog to integrate with sites like Facebook, Bebo, YouTube, Flickr and Twitter. WordPress leverages this voluntary manpower to enhance its commercial product, too. Integration between sites is key as each competes for our time, so it’s not surprising that dataportability.org, despite being a recent initiative, sits in the Brains district among all the big players.

The DataPortability Project is a group created to promote the idea that individuals have control over their data by determining how they can use it and who can use it. This includes access to data that is under the control of another entity.

In practice, this means that we should expect to be able to login to WordPress, select images from our Flickr account and publish them in a blog to Facebook, painlessly and securely. Web applications, including those sold to the Education market, that inhibit the secure but effortless portability of data are digging themselves into a hole.

Session 2: Social Networking

Carrying on from the morning’s Web 2.0 session, in the afternoon I attended a session on how social networking tools are being developed for and integrated into repositories.

Jane Hunter, from the University of Queensland, discussed the HarvANA project, a system which supports and exploits repository users’ tags, comments and other annotations through the development of separate collections of user-contributed metadata. It seems like an interesting and ultimately useful idea, acknowledging the ‘added value’ that user annotations can bring to repository objects. Significantly, users can annotate sections of text, images and other media, allowing annotations to be created for parts of a repository object, rather than just the whole.

David Millard, from the University of Southampton, presented the Faroes project, a development of EPrints for teachers wishing to deposit learning resources. He said that their experience on previous projects had shown that users neither wanted nor needed content packaging standards, and that repository user interfaces needed to provide functionality similar to sites such as Flickr and YouTube. Their project aims to provide a simple, attractive interface to EPrints (called ‘PuffinShare’) aimed at teachers sharing documents, images and other single files (or ‘learning assets’), rather than packages of learning objects. It looked like a great project, highlighting some of the challenges we’ve faced on the LIROLEM project, and one which I think we would be interested in trying. A public beta is due this summer. He pointed out that the growth of Web 2.0 is due to the popularity of personal services (Flickr, YouTube, Delicious), which also have an optional, additional social value to them.

Carol Minton, from the National Science Digital Library, discussed the work they have done on embedding Web 2.0 applications, such as MediaWiki and WordPress, into their repository service. Essentially, they have created services that link blog articles and wiki pages to repository objects, enriching the objects with these community ‘annotations’.