Facebook’s transparent use of OpenID

There was a bit of excitement last month when Facebook became an OpenID relying party. Many of the big names such as Yahoo!, Google, MySpace, etc. have long been providers of OpenIDs to their users, but Facebook is now accepting third-party OpenIDs as a way to log in to their site. What’s even more unusual, and the reason I’m writing this post, is that it wasn’t until a couple of days ago that I noticed how they’d implemented OpenID:

Existing and new users can now link their Facebook accounts with their Gmail accounts or with accounts from those OpenID providers that support automatic login. Once a user links his or her account with a Gmail address or an OpenID URL, logs in to that account, then goes to Facebook, that user will already be logged in to Facebook.

I don’t think this brief explanation on the Facebook developer blog does justice to how this works in practice. What it means is that if you link your Facebook account to your OpenID, you will automatically be logged in to Facebook if you are logged in to your OpenID provider and visit http://facebook.com. On any other OpenID-enabled site, you click a button or type your OpenID into a login box and are then logged in to the site you’re visiting.

With Facebook, they’ve done away with the need to enter your OpenID credentials altogether. If you’re logged in to your OpenID provider, pause for three seconds on http://facebook.com and you’ll be automatically logged in. If you log out of Facebook and then visit http://facebook.com, you’ll be automatically logged in again. It doesn’t seem to work if you visit any other Facebook URL.

So, for example, if you link your Google account to your Facebook account and, like many of us, are logged in to Google throughout the day using Gmail, Google Reader, Google Docs, Google Calendar, or whatever, you never have to think about logging in to Facebook. It’s the closest to a transparent single-sign-on across consumer/social sites that I can think of.

I exchanged a few comments about this on Twitter with Paul Walk, who is less impressed by it than I am. What if you want to log out of Facebook? Really log out? You’d need to log out of your OpenID provider. What if you want to stay logged in to your OpenID provider but log out of Facebook? Why would you want to do that? For most users, I can’t think of a reason. Occasionally I want to log out of a site and ensure I’m completely logged out because I’m testing something. When that happens, I open a different browser, clear cookies and/or use the private browsing mode in Safari or Chrome. The benefits of Facebook’s approach seem to outweigh the occasions when I’d want to do this.

Other than habit, can you think of a reason why you’d want to log out of Facebook but remain logged into your OpenID provider?

Surely what’s important here is whether you are logged in to the world-wide-web or logged out of it. It would be more secure to know that you were either logged in or logged out everywhere, rather than logged in to some sites and logged out of others. If I lock my front door, I know that every room in my house has been secured; I don’t need to lock every room in the house, too. When I unlock my front door, I have the freedom to move around my house, and so do guests. This is where single-sign-on becomes potentially dangerous, because it opens up multiple services that have otherwise been protected by multiple authentication credentials. If someone else uses your browser, they have access to all your accounts. That could be useful when you and your partner share accounts on some websites, but dangerous if you leave your PC unattended or have your laptop stolen from a public library.

However, I imagine that most people on the web are using one or two weak passwords across the web services they use because they can’t remember multiple login details. Surely one good password protecting multiple accounts, used to log in and out of multiple services, is better than one or two weak passwords reused everywhere? If I have one ‘key’ that works everywhere, I’m more likely to get into the habit of using it than I am if I have to remember to log out of multiple sites.

I know of three important blog posts about Facebook’s use of OpenID: two from Luke Shepard, the principal developer of OpenID on Facebook, and another from Simon Willison. A month before Facebook implemented their ‘linked accounts’ feature, Luke Shepard was discussing some ideas about OpenID login on his blog. Now that OpenID login to Facebook has been implemented, he’s been discussing the logout process. Following on from these two posts, Simon Willison provides a key overview of the current implementation in light of the new Facebook username feature:

At any rate, their consumer implementation is fascinating. It’s live right now, even though there’s no OpenID login box anywhere to be seen on the site. Instead, Facebook take advantage of the little known checkid_immediate mode. Once you’ve associated your OpenID with your Facebook account (using the “Linked Accounts” section of the settings pane) Facebook sets a cookie remembering your OpenID provider, which persists even after you log out of Facebook. When you later visit the Facebook homepage, a checkid_immediate request is silently sent to your provider, logging you in automatically if you are already authenticated there.
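To make that a little more concrete, here’s a rough sketch of a checkid_immediate round trip (the provider endpoint, identifier and return URL below are invented for illustration). The relying party silently redirects the browser to the provider with openid.mode=checkid_immediate; the provider must answer straight away, without showing a login or confirmation screen, either asserting the identity or indicating that user interaction would be needed.

The silent request, sent as a browser redirect to the provider’s endpoint:

https://openid.example-provider.com/server
  ?openid.ns=http://specs.openid.net/auth/2.0
  &openid.mode=checkid_immediate
  &openid.claimed_id=https://alice.example-provider.com/
  &openid.identity=https://alice.example-provider.com/
  &openid.return_to=http://www.example-rp.com/openid/return
  &openid.realm=http://www.example-rp.com/

If you are already logged in at the provider, it answers with openid.mode=id_res (a signed, positive assertion) and you arrive back at the site already authenticated. If not, it answers with openid.mode=setup_needed and the site simply leaves you logged out – which is exactly the behaviour described above.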

It’s brilliant (well, I think so) to see how a seemingly minor part of the OpenID specification can be turned into a significant improvement to the user experience, and it signals the way for a transparent single-sign-on experience across the web, using an OpenID provider of your choice. I look forward to the day when I log in to my OpenID provider (actually, my browser does that automatically when I start it up) and, from then on, I’m transparently logged in to the sites I use across the web, until I log out of my OpenID provider. One day, I’ll log in to my browser and be logged in to all the web services I use. One day, I’ll log out of my browser and be logged out of all the web services I use.

Using Google Reader as an OPML editor and feed blender

Last week, Google announced a new feature for Google Reader that is worth noting here, if only because it will make my work a little easier. They’ve introduced the idea of ‘bundles’ of feeds that anyone can create and share via Google Reader, email, OPML or as an Atom feed. There was a bit of confusion at first about what happens after you create a bundle and share it. Dave Winer, based on an exchange with Kevin Marks, thought that the bundles were dynamic ‘reading lists’ based on a proprietary format. This isn’t the case, but it’s worth reading Dave Winer’s original post with comments, and his follow-up post which clarifies what reading lists are (technically, they’re ‘subscription lists‘ – part of the OPML 2.0 specification).

Anyway, what Google has introduced in this update to their feed reader is still very useful and functionally quite similar to the reading list concept. It allows me to group multiple feeds into a set/reading list/bundle and then share that set of feeds with my Google Reader ‘friends’, email a link to a web page of that bundle or download an OPML file of the bundle. This last feature is particularly cool because it means your bundle is portable via the OPML open standard and can be shared beyond Google Reader.

[Screenshots: Build your bundle · Ways to share it · Email a link · Read it, subscribe to the feed or download the OPML]

Effectively, Google Reader has become a simple OPML editor, allowing anyone to gather feeds together and export them as an OPML file. Even better, your bundle is also available as a ‘blended’ Atom feed, achieving something similar to Dave Winer’s notion of a dynamic ‘reading list’, where the creator of the bundle can add or remove feeds and the Atom feed is dynamically updated to reflect those changes. Until now it was a bit of a hassle to create a blended feed from multiple sources. Yahoo! Pipes is a powerful way of doing it, but Pipes isn’t for everyone and I’ve found the feeds it produces are not always available or compatible with other feed reading applications. Recently, I’ve been creating ‘digests’ in feed.informer, but I’m more inclined to use Google Reader now as it’s where I do all my feed reading and I know the application well. Note that you don’t have to remain subscribed to a feed in Google Reader for the bundle to remain persistent either: you can create a bundle from feeds you later unsubscribe from in your reader and the feeds are not deleted from the bundle.
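For anyone who hasn’t looked inside one, an OPML subscription list is a tiny XML file. A bundle exported from Google Reader will look roughly like this (the titles and URLs are invented for the example):

<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head>
    <title>Example bundle</title>
  </head>
  <body>
    <outline text="First example blog" title="First example blog" type="rss"
             xmlUrl="http://blog-one.example.com/feed/" htmlUrl="http://blog-one.example.com/" />
    <outline text="Second example blog" title="Second example blog" type="rss"
             xmlUrl="http://blog-two.example.com/feed/" htmlUrl="http://blog-two.example.com/" />
  </body>
</opml>

Any feed reader that can import OPML will accept a file like this, which is what makes the bundles portable beyond Google Reader.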

There are two obvious uses for all of this. First, a teacher could bundle a reading list of feeds and share them with students via Google Reader, as a simple web page, an OPML file or dynamic Atom feed. Second, using the Atom output, it’s now easy for anyone to create a lifestream feed of all their activity on the web and embed it on their web page or just archive it in Google Reader or elsewhere.

The user is in control

Just a quick nod to Andy Powell’s post yesterday about Identity in a Web 2.0 World. As I’ve said before, I’m trying to catch up with the issues Andy discusses and develop them into a blueprint for the Mozilla/Creative Commons/P2P University Open Education course I am participating in.

Andy writes:

…identity in a Web 2.0 world is not institution-centric, as manifest in the current UK Federation, nor is it based on the currently deployed education-specific identity and access management technologies.  Identity in a Web 2.0 world is user-centric – that means the user is in control…. The important point is that learners (and staff) will come into institutions with an existing identity, they will increasingly expect to use that identity while they are there (particularly in their use of services ‘outside’ the institution) and that they will continue using it after they have left.  As a community, we therefore have to understand what impact that has on our provision of services and the way we support learning and research.

I am therefore reassured that my blueprint outline is not completely off the wall:

University students are at least 18 years old and have spent many years unconsciously accumulating or deliberately developing a digital identity. When people enter university they are expected to accept a new digital identity, one which rarely acknowledges, and cannot easily exploit, their preceding experience and productivity. Students are given a new email address and a university ID, are expected to submit course work using new, institutionally unique tools, and develop a portfolio of work over three to four years which is set apart from their existing portfolio of work and is often difficult to fully exploit after graduation. I think this will be increasingly questioned and resisted by individuals paying to study at university.

My proposal is to show there are existing technical solutions which would allow an individual to register as a student at a university, provide the institution with their Facebook, Google, Yahoo!, OpenID, etc. identification and from then on, the student uses their existing ID to authenticate against any university online resource. There’s an example of how this might happen in the JISC Review of OpenID, which describes one of the project aims as the development of

bridging software that will allow OpenIDs from any source to be used as identities within the production UK (SAML) federation.

The University of Kent hosts a demonstrator of this OpenID-to-Shibboleth bridge.

The other aspect of my blueprint is institutional support of a Personal Learning Environment (PLE). I am suggesting that the WordPress Multi User platform is one technology that could support the characteristics of a PLE, being: ((Taken from Personal Learning Environments: Challenging the dominant design of educational systems. Scott Wilson, Prof. Oleg Liber, Mark Johnson, Phil Beauvoir, Paul Sharples & Colin Milligan, University of Bolton, 2006))

  • Focus on coordinating connections between the user and services
  • Symmetric relationships
  • Individualized context
  • Open Internet standards and lightweight proprietary APIs
  • Open content and remix culture
  • Personal and global scope

The PLE implementation which I have in mind is not, like the VLE, a monolithic system but rather a platform which aggregates and co-ordinates external user-centric services into a coherent learning environment. It is a parasitic system, feeding off content from existing online services such as blogs, social bookmarking, wikis and social networks, but also a rewarding environment which supports and develops the student’s existing portfolio ((In many ways, I am thinking of ‘Identity’ and ‘Portfolio’ as being largely synonymous during the student’s period of study.)) throughout their period of study.

I’ve shown how WordPress can aggregate and archive course activity, how it can enhance the discovery and connectivity of an individual’s and an institution’s online profile through the addition of semantic-web-enabling plugins, and how it can syndicate filtered content to other internal and external systems (through the use of feed2js, it can also syndicate content to legacy systems like Blackboard, which don’t support embedded web feeds). I’ve also shown that it can support a lightweight social network that integrates with an institution’s LDAP/Active Directory authentication system, and that the social network can be OpenID enabled, allowing users to optionally link their OpenID to their WordPress/LDAP account and log in via OpenID instead. ((I’ve tested this with DiSo’s OpenID plugin, which works in principle, but I suspect that once set up, the OpenID login for the specified account completely bypasses the LDAP authentication. Surely just a small amount of development would provide tighter integration. Incidentally, a Shibboleth plugin (by the same author as the OpenID plugin) for WordPress also exists.))
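As a rough illustration of the feed2js approach (treat the parameters as a sketch; the feed2js build page generates the exact snippet for you), the feed is wrapped in a line of JavaScript that even a plain HTML content box in a legacy VLE can include:

<script type="text/javascript"
        src="http://feed2js.org/feed2js.php?src=http%3A%2F%2Fexample.com%2Fcourse%2Ffeed%2F&num=5"></script>
<noscript>
  <a href="http://example.com/course/feed/">View the course feed directly</a>
</noscript>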

Finally, the institutional and wider benefits to the public can be found when the cumulative data of the platform is itself aggregated into a structured site that enables discovery and re-use of content. An example of this is our Community Posts site, and I have also previously discussed the potential development and exploitation of this resource. Designed and licensed carefully, such a site could provide open educational resources at both user and programmatic levels.

So what empowers the user/student and puts them in control? Data-Portability and Creative Commons licensing? ((Actually, I’m starting to think that CC licensing is little more than an interim step to a better understanding of ‘data’. See ‘You don’t nor need to own your data‘. When knowledge is transmitted online, every aspect of its representation is in a form of data. Both information and instruction become ‘data’ – isn’t it backwards to think of knowledge in terms of something ‘owned’? Do you think of instructional methods as ‘yours’?))

Developing BuddyPress for education

In February, I wrote a brief post about setting up BuddyPress with LDAP authentication within a university context (you’re looking at it). Four days ago, BuddyPress reached maturity by hitting version 1.0, which makes this a good time to reflect on what I’d like to see developed for BuddyPress within a university context. This is an initial wish list. I’m not looking for BuddyPress to be an all-singing, all-dancing social network. I don’t care about image collections and status updates (Flickr and Twitter do those jobs nicely). I would, however, like to see it being used for building group identity (projects, special interest groups, classes, courses) and portfolio/resume building. Right now, it’s pretty limited in those areas.

Privacy controls

As I mentioned previously, our social network is private, while the blogs have five levels of optional privacy controls, ranging from public and indexed by Google to private, single-user blogs. However, privacy within the social network is currently all or nothing. It’s a hack that works but has no flexibility. The BuddyPress activity plugin is currently turned off because the privacy plugin I use doesn’t account for the feeds that the activity plugin exposes. It would be nice to have the site-wide social activity visible when logged in. Currently, only information about new blog posts is published site-wide. What I would like is for everything that the activity plugin logs to have site admin options to be 1) visible to non-logged-in users/public; 2) visible to logged-in users; 3) visible to my groups and friends; 4) visible to my friends; and 5) not visible. In addition, the feeds of site-wide activity and member activity could also be configurable, so that 1) a site admin can choose to expose them or not; 2) if allowed, a member can choose to expose their personal feed or not; and 3) a feed key could be used in place of the normal feed URI so that private member feeds could be created (sketched below). Finally, groups and member profiles could optionally be made public or private, so anything following /groups/ or /members/ has an option to be visible outside the community.
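To illustrate the feed key idea (these URLs are hypothetical – this is not something BuddyPress does today), a member’s private activity feed would only be served when a per-member token is present, and the token could be regenerated at any time to revoke access:

http://example.com/members/joss/activity/feed/                   (refused or empty for anonymous requests)
http://example.com/members/joss/activity/feed/?key=8f3a9c41d2    (full private feed for anyone holding the key)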

Group activity

Currently, groups don’t publish very much information and you can’t aggregate information from elsewhere into a group profile. I submitted a ‘wishlist’ ticket to BuddyPress for group activity feeds, requesting feeds for when a new group member joins and for changes to the group wire. It would also be nice to be able to aggregate content from other sites via RSS into the group ‘news’ field, or into a new lifestream-like field, so group photos or videos or whatever could be sucked in. It was possible to do this via a Yahoo! Pipe which combined various feeds, which could then be put through feed2js and dropped into the ‘News’ field. However, embedded javascript is now intentionally blocked 🙁 I guess I could find a work-around.

Member profiles

For both teachers and students, the profile pages could be effective resumes. Currently, the site-admin can build basic grouped fields and there’s a choice of field types, too. I’d like members to be able to build their own fields and for there to be pre-built field types to choose from. It’s possible for the site admin to pre-build fields and probably easy enough for me to pre-build specific fields to design a resume (the examples given of language, country and state are just .csv lists). However, currently, if I provide three ‘Employment’ fields, a member can’t add a fourth ‘Employment’ field, nor can they select dates to correspond to when that employment was. I’m pretty sure I could create the fields, but it’s beyond me to allow a member to build their own profile pages from a selection of pre-built fields.

Finally, in addition to my request for members to be able to make their profiles public, I’d like the member profile to be marked up with hResume markup and exportable in a variety of styled formats: xhtml+css, xml, pdf, txt, doc and rtf.

The entire member profile should use microformat markup where possible. Currently, the profile can export a simple, personal hCard but could also use hCard for company and school addresses, hCalendar for dates, and rel=”tag” for creating a set of tagged skills. LinkedIn partially implements this, by the way.
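As a sketch of what that might look like in the profile template (the class names are standard hCard, hCalendar and rel-tag microformat properties; the details are invented):

<div class="vcard">
  <span class="fn">Jane Example</span>, <span class="org">Example University</span>
  <div class="adr"><span class="locality">Lincoln</span>, <span class="country-name">UK</span></div>
</div>

<div class="vevent">
  <abbr class="dtstart" title="2007-09-01">Sep 2007</abbr> to <abbr class="dtend" title="2009-06-30">Jun 2009</abbr>:
  <span class="summary">Research Assistant, Example University</span>
</div>

Skills: <a rel="tag" href="http://example.com/tag/openid/">OpenID</a>,
<a rel="tag" href="http://example.com/tag/wordpress/">WordPress</a>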

So, privacy controls, group feeds and a resume builder. Not too much to ask is it? I’d probably be able to pay for the resume builder if anyone is interested…

Open Education Project Blueprint

Each participant on the Mozilla Open Education Course has been asked to develop a project blueprint. Here is the start of mine. It’s basically a ‘Personal Learning Environment’ (PLE) ((See Personal Learning Environments: Challenging the dominant design of educational systems)) and I’m going to try to show how WordPress MU is a good technology platform for an institution to easily and effectively support a PLE. I’m going to place an emphasis on ‘identity’ because it’s something I want to learn more about.

Short description

University students are at least 18 years old and have spent many years unconsciously accumulating or deliberately developing a digital identity. When people enter university they are expected to accept a new digital identity, one which rarely acknowledges, and cannot easily exploit, their preceding experience and productivity. Students are given a new email address and a university ID, are expected to submit course work using new, institutionally unique tools, and develop a portfolio of work over three to four years which is set apart from their existing portfolio of work and is often difficult to fully exploit after graduation.

I think this will be increasingly questioned and resisted by individuals paying to study at university. Both students and staff will suffer this disconnect caused by institutions not employing available online technologies and standards rapidly enough. There is a legacy of universities expecting and being expected to provide online tools to staff and students. This was useful and necessary several years ago, but it’s now quite possible for individuals in the UK to study, learn and work apart from any institutional technology provision. For example, Google provides many of these tools and will have a longer relationship with the individual than the university is likely to.

Many students and staff are relinquishing institutional technology ties, and an indicator of this is the massive percentage of students who do not use their university email address (96% in one case study). In the UK, universities are keen to accept mature, work-based and part-time students. For these students, university is just a single part of their lives and should not require the development of a digital identity that mainly serves the institution rather than the individual.

How would it work?

Students identify themselves with their OpenID, which authenticates against a Shibboleth Service Provider. ((See the JISC Review of OpenID.)) They create, publish and syndicate their course work, privately or publicly using the web services of their choice. Students don’t turn in work for assessment, but rather publish their work for assessment under a CC license of their choice.

It’s basically a PLE project blueprint with an emphasis on identity and data-portability. I’m pretty sure I’m not going to get a fully working model to demonstrate by the end of the course, but I will try to show how existing technologies could be stitched together to achieve what I’m aiming for. Of course, the technologies are not really the issue here, the challenge is showing how this might work in an institutional context.

I think it will be possible to show how it’s technically possible using a single platform such as WordPress, which has Facebook Connect, OAuth, OpenID, Shibboleth and RPX plugins. WordPress is also microformat-friendly and profile information can be easily exported in the hCard format. hResume would be ideal for developing an academic profile. The DiSo project is leading the way in this area.

Similar projects:

UMW Blogs?

Open Technology:

OpenID, OAuth, RPX, Shibboleth, RSS, Atom, Microformats, XMPP, OPML, AtomPub, XML-RPC + WordPress

Open Content / Licensing:

I’ll look at how Creative Commons licensing may be compatible with our staff and student IP policies.

Open Pedagogy

No idea. This is a new area for me. I’m hoping that the Mozilla/CC Open Education course can point me in the right direction for this. Maybe you have some suggestions, too?

Pimping your ride on the semantic web

Yesterday, I wrote about how I’d marked up my home page to create a semantic profile of myself that is both auto-discoverable and portable. A place where my identity on the web can be aggregated; not a hole I’ve dug for myself, but an identity that reaches out across the web but always leads back home.

While I enjoy polishing my text editor regularly and hand-crafting beautifully formed, structured data, we all know it’s a fool’s game and that the semantic web is about machines doing all the work for us. So here’s a quick and dirty run down of how to pimp your ride on the semantic web with WordPress and a few plugins.

You’ll need a self-hosted WordPress site that allows you to install plugins. I’ve got one on Dreamhost that costs me $6 a month. Next, you’ll want to install some plugins. I’ll explain what they do afterwards. One thing to note here is that I’m using plugins from the official plugin repository whenever possible. It means that you can install them from the WordPress Dashboard and you’ll get automatic updates (and they’re all GPL compatible). In no particular order…

  • APML
  • Extended Profile
  • Micro Anywhere
  • OpenID
  • XRDS-Simple
  • SIOC
  • wp-RDFa
  • OAI-ORE Map
  • LinkedIn hResume
  • Get_OPML
  • Header-Footer
  • WP Calais / tagaroo
  • Extra Feed Links
  • Lifestream

I think that’s quite enough. All but the SIOC plugin are available from the official WordPress plugin repository. Here’s what they provide:

APML: Attention Profiling Mark-up Language

APML (Attention Profiling Mark-up Language) is an XML-based format for capturing a person’s interests and dislikes. APML allows people to share their own personal attention profile in much the same way that OPML allows the exchange of reading lists between news readers.

The plugin creates an XML file like this one that marks up and weights your WordPress tags as a measure of your interests. It also lists your blogroll/links and any embedded feeds.
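Stripped right down, the generated file follows the APML 0.6 structure and looks something like this (the concepts, scores and generator name are invented for the example):

<APML xmlns="http://www.apml.org/apml-0.6" version="0.6">
  <Head>
    <Title>Example attention profile</Title>
    <Generator>example-wordpress-apml-plugin</Generator>
  </Head>
  <Body defaultprofile="tags">
    <Profile name="tags">
      <ImplicitData>
        <Concepts>
          <Concept key="openid" value="0.85" from="example.com" updated="2009-06-01T12:00:00Z" />
          <Concept key="wordpress" value="0.60" from="example.com" updated="2009-06-01T12:00:00Z" />
        </Concepts>
      </ImplicitData>
    </Profile>
  </Body>
</APML>

The value attribute is the weighting: the more you use a tag, the closer its concept score gets to 1.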

Extended Profile

This plugin adds additional fields to your user profile, which are encoded with hCard semantic microformat markup and can then be displayed in a page or as a sidebar widget. You can import hCard data, too. There might also be another use for this (see below).

Micro Anywhere

Provides a couple of additional editor functions that allow you to create an hCard or hCalendar events page. Here’s an example.

OpenID

This plugin allows users to log in to their local WordPress account using an OpenID, as well as enabling commenters to leave authenticated comments with OpenID. The plugin also includes an OpenID provider, enabling users to log in to OpenID-enabled sites using their own personal WordPress account. XRDS-Simple is required for the OpenID Provider and some features of the OpenID Consumer.

This is key to your identity. You can use your blog URL as your OpenID or delegate a third-party service, such as MyOpenID or ClaimID. In fact, you’ve almost certainly got an OpenID already if you have a Yahoo!, Google, MySpace or AIM account. It’s up to you which one you choose to use as your persistent ID. Read more about OpenID here. It’s important and so are the issues it addresses.
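Delegation itself is just a couple of link elements in the head of your home page. Here’s the usual pattern, covering both OpenID 1.1 and 2.0 discovery, with MyOpenID as the example provider (‘username’ is a placeholder; swap in your own provider’s endpoints):

<!-- OpenID 1.1 delegation -->
<link rel="openid.server" href="https://www.myopenid.com/server" />
<link rel="openid.delegate" href="https://username.myopenid.com/" />
<!-- OpenID 2.0 equivalent -->
<link rel="openid2.provider" href="https://www.myopenid.com/server" />
<link rel="openid2.local_id" href="https://username.myopenid.com/" />

With those in place, you can type your blog URL into any OpenID login box and the actual authentication is handed off to your chosen provider.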

XRDS-Simple

This is required to add further functionality to the OpenID plugin. It adds Attribute Exchange (AX) to your OpenID, which basically means that certain profile information can be passed to third-party services (less form filling for you!). Like a lot of these plugins, install it and forget about it.
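If you’re curious about what the plugin is actually serving, an XRDS document is another small piece of XML advertising your OpenID endpoint and the extensions it supports. A minimal example might look roughly like this (the endpoint URI is a placeholder, not the plugin’s real path):

<?xml version="1.0" encoding="UTF-8"?>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <Service priority="0">
      <Type>http://specs.openid.net/auth/2.0/signon</Type>
      <Type>http://openid.net/srv/ax/1.0</Type>
      <URI>http://example.com/openid/server</URI>
    </Service>
  </XRD>
</xrds:XRDS>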

SIOC

Provides auto-discoverable SIOC metadata. “A SIOC profile describes the structure and contents of a weblog in a machine readable form.”

wp-RDFa

Provides an auto-discoverable FOAF (Friend of a Friend) profile, based on the members of your blog. I’ve been in touch with the author of this plugin and suggested that the extended profile information could also be pulled into the FOAF profile. This is largely dependent on the FOAF specification being finalised, but expect this plugin to do more as FOAF develops.

OAI-ORE Map

Provides an auto-discoverable OAI-ORE resource map of your blog. It conforms to version 0.9 of the specification, which recently made it to v1.0, so I imagine it will be updated in the near future. OAI-ORE metadata describes aggregated resources, so instead of seeing your blog post permalink as the single identifier for, say, a collection of text and multimedia, it creates a map of those resources and links them.

LinkedIn hResume

LinkedIn hResume for WordPress grabs the hResume microformat block from your LinkedIn public profile page allowing you to add it to any WordPress page and apply your own styles to it.

I like this plugin because you benefit from all the features of LinkedIn, but can bring your profile home. Ideal for students or anyone who wants to create a portfolio of work and offer their resume/CV on a single site. Depending on the theme you use, it does require some additional styling.

Get_OPML

This is a nice way to create an OPML file of your sidebar links. If, as on my personal blog, your links point to resources related to you, you can easily create an OPML file like this one. There are a couple of things to note about this plugin, though. The instructions mention a Technorati API key; I didn’t bother with this. When you create your links, just scroll down the page to the ‘advanced’ section and add the RSS feed there. Secondly, the plugin author has, for some stupid reason, hard-coded the feed to their own site into the plugin. Assuming you don’t want this spamming your personal OPML file, download a modified version from here or comment out line 101 in get-opml.php. I guess the plugin author thinks that you’ll be using this to import the OPML into a feed reader and, from there, you can delete his feed. That’s no good to us though. Finally, you’ll want to make your OPML file auto-discoverable. You can do this by adding a line of HTML to your header, using the Header-Footer plugin below.

Header-Footer

This simply allows you to add code to the header and footer of your blog. In our case, you can use it to add an auto-discovery link to the header of every page of your blog.


<link rel="outline" type="text/xml+opml" title="ADD YOUR TITLE HERE" href="http://YOUR_BLOG_ADDRESS/opml.xml" />

WP Calais * + tagaroo

These three plugins use the OpenCalais API to examine your blog posts and return a bunch of semantic tags. I’ve written about this in more detail here (towards the end).

The Calais Web Service automatically creates rich semantic metadata for the content you submit – in well under a second. Using natural language processing, machine learning and other methods, Calais analyzes your document and finds the entities within it. But, Calais goes well beyond classic entity identification and returns the facts and events hidden within your text as well.

It’s an easy way to add relevant tags to your content and broadcast your content for indexing by OpenCalais. They place an additional link in your header that lists the tags for web crawlers and, I guess, improves the SEO for your site.

Extra Feed Links

I’ve written about this plugin previously, too. It adds additional autodiscovery links to your blog for author, category and tag feeds. WordPress feed functionality is very powerful and this plugin makes it especially easy to make those feeds visible.

Lifestream

This isn’t a semantic web plugin, but is a powerful way of aggregating all of your activity across the web into a single activity stream. See my example, here. It also produces a single RSS feed from your aggregated activity. Nice 😉

Wrapping things up

If you set all of this up, you’ll have a WordPress site that can act as your primary identity across the web, aggregates much of your activity on the web into a single site and also offers multiple ways for people to discover and read your site. You also get a ‘well-formed’ portfolio that is enriched with semantic markup and links you to the wider online community in a way that you control.

Bear in mind that some of these plugins might not appear to do anything at all. The semantic web is about machines being able to read and link data, right? If you look closely in the source of your home page, you’ll see a few lines that speak volumes about you in machine talk.


<link rel="meta" href="./wp-content/plugins/wp-rdfa/foaf.php"type="application/rdf+xml" title="FOAF"/>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">
<link rel="meta" type="text/xml" title="APML" href="http://blog.josswinn.org/apml/" />
<link rel="alternate" type="application/rss+xml" title="NoteStream RSS Feed" href="http://blog.josswinn.org/feed/" />
<link rel="resourcemap" type="application/atom+xml" href="http://blog.josswinn.org/wp-content/plugins/oai-ore/rem.php"/>

If you do want a way to view the data, I recommend the following Firefox add-ons:

Operator: Auto-discovers any embedded microformats and provides useful ways to search for similar data via third-party services elsewhere on the web.

OPML Reader: Auto-discovers an OPML file if you have one linked in your header. Allows you to either download the file or read it on Grazr.

Semantic Radar: Auto-discovers embedded RDF data. Displays custom icons to indicate the presence of FOAF, SIOC, DOAP and RDFa formats.

The Tabulator Extension: Auto-discovers and provides a table-based display for RDF data on the Semantic Web. Makes RDF data readable to the average person and shows how data are linked together across different sites.

As always, please let me know how this overview could be improved or if you know of other ways to add semantic functionality to your WordPress blog. Thanks.

Addicted to feeds

I’ve been a long time consumer of news feeds and spend a lot of time reading the web via 200+ feeds in Google Reader. More recently, and largely as a result of working on WriteToReply, I’ve become just as addicted to publishing feeds from any data end point I can find.

WordPress makes this quite easy for developers, providing a whole load of functions and template tags for feeds. For the rest of us, there’s also documentation which is useful if you’re wondering what kinds of feeds can be generated from a basic WordPress site.

All the examples below assume you’re using ‘pretty URLs’. If your URLs are something like http://example.com/?p=123 then the same principles apply, but you’ll use the format /?feed=feed_type, i.e. http://example.com/?feed=rss2. The documentation shows full examples.
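For instance, the category and tag feeds shown further down have query-string equivalents along these lines (the category ID and tag slug are illustrative):

http://example.com/?cat=3&feed=rss2
http://example.com/?tag=my_tag&feed=atom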

So, here are the basic content feeds. RSS is RSS version 0.92 and RDF is RSS version 1.0, if you were wondering.

http://example.com/feed/
http://example.com/feed/rss/
http://example.com/feed/rss2/
http://example.com/feed/rdf/
http://example.com/feed/atom/

It’s also pretty straightforward to create a feed from a category or tag

http://example.com/category/my_category/feed/
http://example.com/tag/my_tag/feed/

You can also create feeds from combined tags

http://example.com/tag/tag1+tag2+tag3/feed/

And we know that a feed is available for site comments

http://example.com/comments/feed/

and it’s simple to grab a feed of comments from a single post by appending /feed/ to the end of the post permalink.

http://example.com/2009/01/01/my-latest-post/feed

You can also create a feed of a single post itself, by appending '?withoutcomments=1' to the end of the URL

http://example.com/2009/01/01/my-latest-post/feed/?withoutcomments=1

There is a feed for each author of the blog

http://example.com/author/joss/feed

but alas, as far as I know, no feed for the comments by any particular person.

You can also do something fancy with dates

http://example.com/2009/feed
http://example.com/2009/01/feed
http://example.com/2009/01/15/feed

and one of my favourite types of feed is from a search

http://example.com/?s=search_term&feed=rss2

Now all of this is well and good, but how many readers are going to know or care about constructing the various types of feeds available? Fortunately, it’s possible to make many of these feeds auto-discoverable either by adding some simple code to your theme’s header.php or installing a plugin.

By default, two feeds are auto-discoverable on your WordPress site: an Atom and an RSS2 feed of your posts.

By using the Extra Feed Links plugin, you can make your comments, category, tag, author and search feeds autodiscoverable.

It’s also got a useful template tag that allows you to show the feed links in your theme, making the discovery of feeds even easier. I created a simple widget for the plugin to display the feed links and an RSS icon in the sidebar.

Here’s the code. Let me know where it could be improved as I just hacked it together from looking at other widgets.

<?php
// Widget output: prints the extra feed links (and an optional feed icon) inside the sidebar markup.
function widget_extrafeeds($args) {
	extract($args);
	echo $before_widget;
	echo $before_title . $widget_name . $after_title;
	?>
	<ul class="sidebarList">
		<?php extra_feed_link(); ?>
		<?php extra_feed_link('http://path/to/your/feed/icon/feed.png'); ?>
	</ul>
	<?php
	echo $after_widget;
}

// Register the widget once WordPress has initialised.
function widget_extrafeeds_register() {
	register_sidebar_widget('Extra Feeds', 'widget_extrafeeds');
}
add_action('init', 'widget_extrafeeds_register');
?>

To get this to work with the plugin, you need to add this to the very bottom of the plugin’s main.php file

// widget support
require(dirname(__FILE__) . '/widget.php');

Like I said, if anyone can improve on this, do let me know. Also note that you’ll need to point the URL in the widget to a feed icon. A lot of themes include them in their /images/ directory, which makes it easy.

By using the widget or template tag, you can have these appearing on the relevant pages.

Try it by using http://writetoreply.org/tags 🙂

If you’re interested in how to add category and tag auto-discovery feeds to your theme’s source code, try adding this to your header.php

<?php if (is_category()) { ?>
<link rel="alternate" type="application/atom+xml" title="<?php bloginfo('name'); ?> &raquo; <?php single_cat_title(''); ?> Atom Feed" href="<?php echo get_category_feed_link(get_query_var('cat'), 'atom'); ?>" />
<?php } ?>

<?php if (is_tag()) { ?>
<link rel="alternate" type="application/atom+xml" title="<?php bloginfo('name'); ?> &raquo; <?php single_tag_title(''); ?> Atom Feed" href="<?php echo get_tag_feed_link(get_query_var('tag_id'), 'atom'); ?>" />
<?php } ?>

<?php if (is_category()) { ?>
<link rel="alternate" type="application/rss+xml" title="<?php bloginfo('name'); ?> &raquo; <?php single_cat_title(''); ?> RSS2 Feed" href="<?php echo get_category_feed_link(get_query_var('cat'), 'rss2'); ?>" />
<?php } ?>

<?php if (is_tag()) { ?>
<link rel="alternate" type="application/rss+xml" title="<?php bloginfo('name'); ?> &raquo; <?php single_tag_title(''); ?> RSS2 Feed" href="<?php echo get_tag_feed_link(get_query_var('tag_id'), 'rss2'); ?>" />
<?php } ?>

<?php if (is_category()) { ?>
<link rel="alternate" type="application/rdf+xml" title="<?php bloginfo('name'); ?> &raquo; <?php single_cat_title(''); ?> as RDF data" href="<?php echo get_category_feed_link(get_query_var('cat'), 'rdf'); ?>" />
<?php } ?>

<?php if (is_tag()) { ?>
<link rel="alternate" type="application/rdf+xml" title="<?php bloginfo('name'); ?> &raquo; <?php single_tag_title(''); ?> as RDF data" href="<?php echo get_tag_feed_link(get_query_var('tag_id'), 'rdf'); ?>" />
<?php } ?>

I learned this from the author of this related plugin, which is similar but not quite as powerful as the Extra Feed Links plugin.

Finally, if you use FeedBurner, beware that it breaks some of the above feeds. One fix for ensuring that tag and category feeds continue to work as they should is to modify the FeedBurnerFeedSmith plugin as noted here. Simply change the line

is_feed() && $feed != 'comments-rss2' && !is_single() &&

to read

is_feed() && $feed != 'comments-rss2' && !is_single() && !is_tag() &&

That’ll do for now. I intend to learn more about the RSS and Atom specifications over the next few weeks and will post anything I think relevant here. If you can add anything to this post, please do leave a comment. Thanks.

CommentPress

CommentPress is, for educators, one of the most important developments to come out of the WordPress community and one of the most significant innovations that I know of in online publishing. I first learned about it when I saw that Yale University Press were using it to invite comment on Yochai Benkler’s book, The Wealth of Networks. In its original form, CommentPress is a theme for WordPress that allows readers to comment on, annotate and discuss paragraphs of text. In fact, although installed as a theme, it transforms a site not only by design, but with functionality you’d normally expect from plugins. In CommentPress v1.x, form and function came as a single package. It’s worth reading about the background to CommentPress. You’ll see that it’s part of a larger course of research by the Institute for the Future of the Book.

Institute for the Future of the Book was founded in 2004 to [… stimulate] a broad rethinking—in publishing, academia and the world at large—of books as networked objects. CommentPress is a happy byproduct of this process, the result of a series of “networked book” experiments run by the Institute in 2006-7. The goal of these was to see whether a popular net-native publishing form, the blog, which, most would agree, is very good at covering the present moment in pithy, conversational bursts but lousy at handling larger, slow-developing works requiring more than chronological organization—whether this form might be refashioned to enable social interaction around long-form texts… We can imagine a number of possibilities: scholarly contexts: working papers, conferences, annotation projects, journals, collaborative glosses; educational: virtual classroom discussion around readings, study groups; journalism/public advocacy/networked democracy: social assessment and public dissection of government or corporate documents, cutting through opaque language and spin (like the Iraq Study Group Report, a presidential speech, the federal budget, a Walmart or Google press release); creative writing: workshopping story drafts, collaborative storytelling; recreational: social reading, book clubs.

You can also read about CommentPress in The Chronicle of Higher Education and The Journal of Electronic Publishing.

We have started to use CommentPress at the University of Lincoln for the discussion of internal documents, and feedback from staff has been good. Many are astonished at what it makes possible. A departmental research strategy paper received over 100 comments from nine staff; something we’d never have had by emailing the document out for comment. Of course, I am keen to use it to support courses, and a colleague and I have recently applied for funding to use CommentPress in a course with over 100 Criminology students, who are normally asked to critique texts and respond by emailing Word documents to their tutor. Using CommentPress allows for transparent and open formative feedback and assessment by both staff and student peers.

Outside of my work for the university, I’ve been developing WriteToReply with Tony Hirst from the Open University. You can read about how we started WriteToReply, and you’ll see that CommentPress is fundamental to what we’re trying to achieve; we’re using it for networked democracy, as suggested above. CommentPress is, in fact, a comment engine for each document site. Two things make this possible. First, and most obvious, is the fact that readers on a document site can direct comments to specific paragraphs of text. Readers can also respond to other readers’ comments, and a happy by-product of our re-publication of the Digital Britain – Interim Report is that the discussion still continues, despite the consultation period being over. So CommentPress is an engine for on-site comment and discussion. Texts are dissected but remain whole; they also become social objects.

The second important contribution CommentPress has made is the provision of permalinks for each paragraph in the text. This provides a unique URI or URL for each paragraph of text, making linked references from third-party web sites possible. Combined with the trackback/pingback system built into decent web publishing platforms, CommentPress makes remote commenting on text possible, as Tony explains on his blog.

What this means is that the paragraph, action point, section or whatever can become a linked resource, or linked context, and can support remote commenting. And in turn, the remark made on the third party site can become a linked annotation to the corresponding part of the original report… How? Well through the judicious use of trackbacks… So even if you don’t want to comment on the Digital Britain Interim report on the WriteToReply site, but you do care, why not post your thoughts on your own blog, and link your thoughts directly back to the appropriate part of the report on WriteToReply?

It’s this feature, so easily missed, which makes CommentPress a comment engine. An engine suggests an underlying technology that drives something greater. By introducing paragraph permalinks, text can now be linked at a much more accurate and deeper level than was previously possible. Texts are transformed into uniquely identifiable resources of data. Academics can now reference paragraphs rather than page numbers, and readers can reflect, comment and participate in the analysis of texts from their own site. For the reader, CommentPress provides a fluid interface to the document as a whole but, at a technical level, explodes it across the Internet.
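To make the mechanics concrete (the URLs below are invented and the exact fragment format depends on the CommentPress version), each paragraph gets its own permalink, and a post on a third-party blog that links to it can be picked up via trackback/pingback and attached to that very paragraph as a remote comment:

A paragraph-level permalink within a re-published report:
http://example.org/digitalbritain/section-3/#paragraph-7

A post on your own blog that links to it:
<p>As I argue in response to <a href="http://example.org/digitalbritain/section-3/#paragraph-7">paragraph 7
of section 3</a>, the proposed measure looks unworkable.</p>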

In the running of WriteToReply, we’ve tested CommentPress quite hard and found it to be a complex and fragile tool. Until recently, it hadn’t been updated to reflect the fast-changing development of WordPress, and because of its extensive use of Javascript it clashes with other plugins, so while it transforms a WordPress site, it also restricts functionality that would otherwise be possible. Fortunately, CommentPress 2 is being actively worked on and I’ve been helping to test it with Eddie Tejeda, the original developer. It’s currently in beta, but Eddie is responding to my feedback and fixing issues rapidly. There is a mailing list for CommentPress and the code is publicly accessible.

[Screenshot: CommentPress 2.2 Beta]

If you test CommentPress 2, you’ll immediately see that it’s been split into a suite of plugins and themes and that it’s now much more flexible, both in its compatibility with other WordPress plugins and in being able to select different components, options and themes. Notably, paragraph permalinks are available as a separate plugin, which means that any WordPress blog will be able to have paragraph-level URIs without necessarily supporting paragraph-level commenting. My test site is on WriteToReply. Feel free to have a look and post comments, if you wish. As I write, it’s not quite ready for everyday use, but at the speed at which Eddie has been working over the last few days, I’m confident that I’ll be able to use it here at the university and on WriteToReply before the month’s out. If you’re used to using v1.4.1, you’ll notice a lot of change. Remember that it’s still beta software and that not all of the features have been fully implemented yet. It would be great if other people could help test it across various browsers and with different documents. Multimedia is not something I’ve yet been able to throw at it, for example.

Finally, CommentPress needs continued support in terms of testing, reporting issues, bug fixes and feature development. This can be done voluntarily, but given its potential to support education, business and government consultations, I, for one, will be looking for ways to raise funding to help support all of this. If you know of any possible funding opportunities within UK Higher Education, please do let me know.