Think you know how to use Google Search?

Following yesterday’s post about Google’s 15 second search tips, I thought it would be pretty easy to pull together and develop an ongoing series of short tutorials on how to use Google’s search engine. I was also motivated to do this because, coincidentally, at yesterday’s Teaching and Learning Symposium I attended an elective session in which a secondary school teacher asked, “tell me what I can do to help develop my students’ IT skills for when they attend your university.” One of the answers he got was “teach them how to use Google search.”

Both new students and staff can spend a lot of time reaching a basic level of digital literacy. Google’s search engine is a powerful tool disguised by a very simple interface which many of us don’t use to full effect. New features are being added rapidly, too, so I thought a blog which brought together tutorials made by Google and, in time, by me, might be a useful resource for both staff and students. It’ll also give me the opportunity to learn more about Google’s search engine, which I’m sure I don’t always use to full effect either.

Think you know how to use Google Search? Google Search Tutorials

Facebook’s transparent use of OpenID

There was a bit of excitement last month when Facebook became an OpenID relying party. Many of the big names such as Yahoo!, Google, MySpace, etc. have long been providers of OpenIDs to their users, but Facebook is now accepting third-party OpenIDs as a way to log in to their site. What’s even more unusual, and the reason I’m writing this post, is that it wasn’t until a couple of days ago that I noticed how they’d implemented OpenID:

Existing and new users can now link their Facebook accounts with their Gmail accounts or with accounts from those OpenID providers that support automatic login. Once a user links his or her account with a Gmail address or an OpenID URL, logs in to that account, then goes to Facebook, that user will already be logged in to Facebook.

I don’t think this brief explanation on the Facebook developer blog does justice to how this works in practice. What it means is that if you link your Facebook account to your OpenID, you will automatically be logged in to Facebook if you are logged into your OpenID provider and visit http://facebook.com. On any other OpenID-enabled site, you click a button or type your OpenID into a login box and are then logged in to the site you’re visiting.

With Facebook, they’ve done away with the need to enter your OpenID credentials altogether. If you’re logged in to your OpenID provider, pause for three seconds on http://facebook.com and you’ll be automatically logged in. If you log out of Facebook and then visit http://facebook.com, you’ll be automatically logged in again. It doesn’t seem to work if you visit any other Facebook URL.

So, for example, if you link your Google account to your Facebook account and, like many of us, are logged in to Google throughout the day using GMail, Google Reader, Google Docs, Google Calendar, or whatever, you never have to think about logging in to Facebook. It’s the closest to a transparent single-sign-on across consumer/social sites that I can think of.

I exchanged a few comments about this on Twitter with Paul Walk, who is less impressed by it than I am. What if you want to log out of Facebook? Really log out? You’d need to log out of your OpenID provider. What if you want to stay logged in to your OpenID provider but log out of Facebook? Why would you want to do that? For most users, I can’t think of a reason. Occasionally I want to log out of a site and ensure I’m completely logged out because I’m testing something. When that happens, I open a different browser, clear cookies and/or use the private browsing mode in Safari or Chrome. The benefits of Facebook’s approach seem to outweigh the occasions when I’d want to do this.

Other than habit, can you think of a reason why you’d want to log out of Facebook but remain logged into your OpenID provider?

Surely what’s important here is whether you are logged in to the world-wide-web or logged out of the world-wide-web. It would be more secure, surely, to know if you were logged in or logged out rather than whether you were logged in to some sites and logged out of others. If I lock my front door, I know that every room in my house has been secured. I don’t need to lock every room in the house, too. When I unlock my front door, I have the freedom to move around my house. And so do guests. This is where single-sign-on becomes potentially dangerous, because it opens up multiple services that have been otherwise protected by multiple authentication credentials. If someone else uses your browser, they have access to all your accounts. That could be useful when you and your partner share accounts on some websites, but dangerous if you leave your PC unattended or have your laptop stolen from a public library.

However, I imagine that most people on the web are using one or two weak passwords across the web services they use because they can’t remember multiple login details. Surely one good password to protect multiple accounts which is used to log in and out of multiple services is better than one or two weak passwords that are used everywhere? If I have one ‘key’ that works everywhere, I’m more likely to get into the habit of using it than I am if I have to remember to log out of multiple sites.

I know of three important blog posts about Facebook’s use of OpenID: two from Luke Shepard, the principal developer of OpenID on Facebook, and another from Simon Willison. A month before Facebook implemented their ‘linked accounts’ feature, Luke Shepard was discussing some ideas about OpenID login on his blog. Now that OpenID login to Facebook has been implemented, he’s been discussing the logout process. Following on from these two posts, Simon Willison provides a key overview of the current implementation in light of the new Facebook username feature:

At any rate, their consumer implementation is fascinating. It’s live right now, even though there’s no OpenID login box anywhere to be seen on the site. Instead, Facebook take advantage of the little known checkid_immediate mode. Once you’ve associated your OpenID with your Facebook account (using the “Linked Accounts” section of the settings pane) Facebook sets a cookie remembering your OpenID provider, which persists even after you log out of Facebook. When you later visit the Facebook homepage, a checkid_immediate request is silently sent to your provider, logging you in automatically if you are already authenticated there.
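
For anyone curious what that checkid_immediate exchange actually looks like on the wire, here’s a rough sketch in Python of the request an OpenID 2.0 relying party would send. All of the URLs and identifiers below are made up for illustration; I haven’t inspected the exact parameters Facebook uses.

    # Sketch of an OpenID 2.0 checkid_immediate request (all URLs hypothetical).
    from urllib.parse import urlencode

    op_endpoint = "https://openid.example.com/auth"  # the user's OpenID provider

    params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_immediate",          # no user interaction allowed
        "openid.claimed_id": "https://example.com/alice",
        "openid.identity": "https://example.com/alice",
        "openid.return_to": "https://relyingparty.example.com/openid/return",
        "openid.realm": "https://relyingparty.example.com/",
    }

    # The relying party sends the browser (or a hidden iframe) to this URL.
    # If the user already has a session with their provider, the response
    # comes back with openid.mode=id_res and they are silently signed in;
    # if not, the provider answers with openid.mode=setup_needed and nothing
    # visible happens.
    print(op_endpoint + "?" + urlencode(params))

The whole exchange can happen during an ordinary page load, which is why the login box disappears entirely.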

It’s brilliant (well, I think so) to see how a seemingly minor part of the OpenID specification can be turned into a significant improvement to the user experience, and it signals the way towards a transparent single-sign-on experience across the web, using an OpenID provider of your choice. I look forward to the day when I log in to my OpenID provider (actually, my browser does that automatically when I start it up) and, from then on, I’m transparently logged in to the sites I use across the web, until I log out of my OpenID provider. One day, I’ll log in to my browser and be logged in to all the web services I use. One day, I’ll log out of my browser and be logged out of all the web services I use.

The Wire. Linking aggregated posts and comments

Philip Schmidt has developed a way to aggregate both posts and comments inline. Read more about it on Philip’s blog. Jim Groom’s posted about it, too. I have to run to a meeting soon, but I just wanted to show how you can take the Yahoo Pipes output and run it through feed2js to embed The Wire in just about any web page. Nice.

Using Google Reader as an OPML editor and feed blender

Last week, Google announced a new feature for Google Reader that is worth noting here, if only because it will make my work a little easier. They’ve introduced the idea of ‘bundles’ of feeds that anyone can create and share via Google Reader, email, OPML or as an Atom feed. There was a bit of confusion at first about what happened after you create a bundle and shared it. Dave Winer, based on an exchange with Kevin Marks, thought that the bundles were dynamic ‘reading lists’ based on a proprietary format. This isn’t the case but it’s worth reading Dave Winer’s original post with comments, and his follow up post which clarifies what reading lists are (technically, they’re ‘subscription lists‘ – part of the OPML 2.0 specification).

Anyway, what Google has introduced in this update to their feed reader is still very useful and functionally quite similar to the reading list concept. It allows me to group multiple feeds into a set/reading list/bundle and then share that set of feeds with my Google Reader ‘friends’, email a link to a web page of that bundle or download an OPML file of the bundle. This last feature is particularly cool because it means your bundle is portable via the OPML open standard and can be shared beyond Google Reader.

(Screenshots: build your bundle; ways to share it; email a link; read it, subscribe to the feed or download the OPML.)

Effectively, Google Reader has become a simple OPML editor, allowing anyone to gather feeds together and export them as an OPML file. Even better, your bundle is also available as a ‘blended’ Atom feed, achieving something similar to Dave Winer’s notion of a dynamic ‘reading list’, where the creator of the bundle can add or remove feeds and the Atom feed is dynamically updated to reflect those changes.

Until now it was a bit of a hassle to create a blended feed from multiple sources. Yahoo! Pipes is a powerful way of doing it, but Pipes isn’t for everyone and I’ve found the feeds it produces are not always available or compatible with other feed reading applications. Recently, I’ve been creating ‘digests’ in feed.informer, but I’m more inclined to use Google Reader now as it’s where I do all my feed reading and I know the application well. Note that you don’t have to remain subscribed to a feed in Google Reader for a bundle to remain persistent either: you can create a bundle from feeds you later unsubscribe from in your reader and they are not deleted from the bundle.
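
To make the OPML side of this concrete, here’s a sketch of the kind of ‘subscription list’ a bundle export boils down to, generated with Python’s standard library. The feed titles and URLs are invented, and the exact attributes Google Reader writes out may differ slightly from this.

    # Build a minimal OPML 2.0 subscription list -- the open format a
    # Google Reader bundle can be exported as. The feeds are hypothetical.
    import xml.etree.ElementTree as ET

    feeds = [
        ("Example blog one", "http://blog-one.example.com/feed/", "http://blog-one.example.com/"),
        ("Example blog two", "http://blog-two.example.com/feed/", "http://blog-two.example.com/"),
    ]

    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = "My reading list"
    body = ET.SubElement(opml, "body")
    for title, xml_url, html_url in feeds:
        ET.SubElement(body, "outline", type="rss", text=title,
                      title=title, xmlUrl=xml_url, htmlUrl=html_url)

    ET.ElementTree(opml).write("bundle.opml", encoding="utf-8", xml_declaration=True)

Any feed reader that understands OPML can import the resulting file, which is what makes a bundle portable beyond Google Reader.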

There are two obvious uses for all of this. First, a teacher could bundle a reading list of feeds and share them with students via Google Reader, as a simple web page, an OPML file or dynamic Atom feed. Second, using the Atom output, it’s now easy for anyone to create a lifestream feed of all their activity on the web and embed it on their web page or just archive it in Google Reader or elsewhere.

Scriblio, Triplify and XMPP PubSub

It occurred to me this morning, as I woke from my slumber, that the work I’ve been doing recently with WordPress could also be applied to a library catalogue using Scriblio.

Scriblio (formerly WPopac) is an award winning, free, open source CMS and OPAC with faceted searching and browsing features based on WordPress. Scriblio is a project of Plymouth State University, supported in part by the Andrew W. Mellon Foundation.

Which means that you can import your library catalogue into WordPress and the user can search for and retrieve a record for The Films of Jean-Luc Goddard. Have a look around Plymouth State’s Scriblio and you’ll get a good feel for what’s possible.

Anyway, taking Scriblio’s functionality for granted, you could easily add Triplify to the mix as I have discussed before. So with very little effort, you can convert your library catalogue to RDF N-Triples (and/or JSON). My question to you librarians is: knowing this is possible and fairly trivial to do, is there any value to you in exposing your OPACs in this way?
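
To make that slightly more concrete, this is roughly the sort of N-Triples output you would end up with for a single catalogue record. The URIs, predicates and record details are all invented for illustration; what Triplify actually emits depends entirely on how you map your database tables to the web.

    # Sketch: emit a few N-Triples for one hypothetical catalogue record,
    # using Dublin Core predicates purely as an example vocabulary.
    DC = "http://purl.org/dc/elements/1.1/"
    record_uri = "<http://library.example.ac.uk/triplify/record/12345>"

    fields = {
        DC + "title":   "An Example Book Title",
        DC + "creator": "Author, A. N.",
        DC + "date":    "1980",
    }

    for predicate, value in fields.items():
        print('%s <%s> "%s" .' % (record_uri, predicate, value))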

Next, as I lay listening to my daughter chat to her squeaky duck, I thought about the other stuff I’ve been looking at recently with WordPress. Once you think of your library catalogue as a WordPress site, there’s quite a lot of fun to be had. You could ramp up the feeds that you offer from your OPAC, use the OpenCalais API to add semantic tags, plug in some more semantic add-ons if you wish (autodiscovery of SIOC, FOAF, OAI-ORE data??), and, perhaps most fun of all, publish OPAC records in realtime over XMPP PubSub.
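
For the XMPP part, publishing a record is just an XEP-0060 stanza pushed to a pubsub node. Below is a sketch of what that stanza might look like for a new OPAC record; the JIDs, node name and Atom payload are all hypothetical, and in practice you would let an XMPP client library build and send this rather than hand-rolling the XML.

    # Sketch: an XEP-0060 publish stanza for a hypothetical OPAC record.
    # A real deployment would open an XMPP stream with a client library
    # and send this as an <iq type='set'/>.
    import xml.etree.ElementTree as ET

    iq = ET.Element("iq", attrib={"type": "set",
                                  "from": "opac@library.example.ac.uk",
                                  "to": "pubsub.library.example.ac.uk",
                                  "id": "pub1"})
    pubsub = ET.SubElement(iq, "pubsub", xmlns="http://jabber.org/protocol/pubsub")
    publish = ET.SubElement(pubsub, "publish", node="opac/new-records")
    item = ET.SubElement(publish, "item", id="record-12345")
    entry = ET.SubElement(item, "entry", xmlns="http://www.w3.org/2005/Atom")
    ET.SubElement(entry, "title").text = "An example catalogue record"
    ET.SubElement(entry, "id").text = "http://library.example.ac.uk/record/12345"

    print(ET.tostring(iq, encoding="unicode"))

Anyone subscribed to the node would have new records pushed to them the moment they are published, which is the realtime bit.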

Which brings me to JISCPress, our recent #jiscri project proposal, which we may or may not get funded (what are we, a week or two away from finding out??). In that project, we’re proposing a WordPress MU platform for publishing and discussing JISC funding calls and project reports (among other things). There’s a lot of cross-over between the above Scriblio ideas and JISCPress. So much so, that it’s probably no more than a day’s work to transform the JISCPress platform, hosted as an Amazon Machine Image, into a multi-user OPAC platform where, potentially, all UK university libraries publish their OPACs via separate Scriblio sites.

You could then, as wordpress.com has done, publish an XMPP firehose from every catalogue over PubSub for search engines or whoever else is interested in realtime data from UK university library catalogues. Alternatively, instead of the WPMU setup, each university library could maintain its own Scriblio install and publish an XMPP feed to an agreed server (though that approach seems like more hassle than is necessary if you ask me. You’re bound to have some libraries falling behind and not upgrading their sites as things develop. For less than a collective £4K/year, we could all buy into commercial support for a WPMU site from Automattic to help maintain the server-side stuff).

I dunno. Maybe this is all off the wall, but the building blocks are all there. Is anyone experimenting with Scriblio in this way? Don’t tell me, a bunch of you have been doing it for years…

The user is in control

Just a quick nod to Andy Powell’s post yesterday about Identity in a Web 2.0 World. As I’ve said before, I’m trying to catch up with the issues Andy discusses and develop them into a blueprint for the Mozilla/Creative Commons/P2P University Open Education course I am participating in.

Andy writes:

…identity in a Web 2.0 world is not institution-centric, as manifest in the current UK Federation, nor is it based on the currently deployed education-specific identity and access management technologies.  Identity in a Web 2.0 world is user-centric – that means the user is in control…. The important point is that learners (and staff) will come into institutions with an existing identity, they will increasingly expect to use that identity while they are there (particularly in their use of services ‘outside’ the institution) and that they will continue using it after they have left.  As a community, we therefore have to understand what impact that has on our provision of services and the way we support learning and research.

I am therefore reassured that my blueprint outline is not completely off the wall:

University students are at least 18 years old and have spent many years unconsciously accumulating or deliberately developing a digital identity. When people enter university they are expected to accept a new digital identity, one which may barely acknowledge, let alone exploit, their preceding experience and productivity. Students are given a new email address and a university ID, and are expected to submit coursework using new, institutionally unique tools and to develop a portfolio of work over three to four years which is set apart from their existing portfolio and often difficult to fully exploit after graduation. I think this will be increasingly questioned and resisted by individuals paying to study at university.

My proposal is to show that there are existing technical solutions which would allow an individual to register as a student at a university, provide the institution with their Facebook, Google, Yahoo!, OpenID, etc. identification and, from then on, use that existing ID to authenticate against any university online resource. There’s an example of how this might happen in the JISC Review of OpenID, which describes one of the project aims as the development of

bridging software that will allow OpenIDs from any source to be used as identities within the production UK (SAML) federation.

The University of Kent host a demonstrator of this OpenID-to-Shibboleth bridge.

The other aspect of my blueprint is institutional support of a Personal Learning Environment (PLE). I am suggesting that the WordPress Multi User platform is one technology that could support the characteristics of a PLE, being: ((Taken from Personal Learning Environments: Challenging the dominant design of educational systems. Scott Wilson, Prof. Oleg Liber, Mark Johnson, Phil Beauvoir, Paul Sharples & Colin Milligan, University of Bolton, 2006))

  • Focus on coordinating connections between the user and services
  • Symmetric relationships
  • Individualized context
  • Open Internet standards and lightweight proprietary APIs
  • Open content and remix culture
  • Personal and global scope

The PLE implementation which I have in mind is not, like the VLE, a monolithic system but rather a platform which aggregates and co-ordinates external user-centric services into a coherent learning environment. It is a parasitic system, feeding off content from existing online services such as blogs, social bookmarking, wikis and social networks, but also a rewarding environment which supports and develops the student’s existing portfolio ((In many ways, I am thinking of ‘Identity’ and ‘Portfolio’ as being largely synonymous during the student’s period of study.)) throughout their period of study.
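
As a trivial illustration of that ‘parasitic’ aggregation, the sketch below pulls a student’s external feeds into a single, date-ordered stream. It assumes the feedparser library and uses made-up feed URLs; in the platform I have in mind, WordPress plugins would be doing this job rather than a standalone script.

    # Minimal sketch of PLE-style aggregation: combine a student's external
    # services into one stream. Feed URLs are hypothetical; requires feedparser.
    import time
    import feedparser

    student_feeds = [
        "http://student.example.com/blog/feed/",     # personal blog
        "http://bookmarks.example.com/rss/student",  # social bookmarking
        "http://photos.example.com/student/feed",    # photo stream
    ]

    items = []
    for url in student_feeds:
        for entry in feedparser.parse(url).entries:
            items.append((entry.get("published_parsed") or time.gmtime(0),
                          entry.get("title", "Untitled"),
                          entry.get("link", "")))

    # Newest first: this combined stream is what the PLE would archive,
    # tag and republish on the student's behalf.
    for published, title, link in sorted(items, reverse=True):
        print(time.strftime("%Y-%m-%d", published), title, link)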

I’ve shown how WordPress can aggregate and archive course activity, how it can enhance the discovery and connectivity of an individual’s and institution’s online profile through the addition of semantic-web-enabling plugins, and how it can syndicate filtered content to other internal and external systems (through the use of feed2js, it can also syndicate content to legacy systems like Blackboard, which don’t support embedded web feeds). I’ve also shown that it can support a lightweight social network that integrates with an institution’s LDAP/Active Directory authentication system, and that this social network can be OpenID-enabled, allowing users to optionally link their OpenID to their WordPress/LDAP account and log in via OpenID instead. ((I’ve tested this with DiSo’s OpenID plugin, which works in principle, but I suspect that, once set up, the OpenID login for the specified account completely bypasses the LDAP authentication. Surely just a small amount of development would provide tighter integration. Incidentally, a Shibboleth plugin for WordPress (by the same author as the OpenID plugin) also exists.))

Finally, the institutional and wider benefits to the public can be found when the cumulative data of the platform is itself aggregated into a structured site that enables discovery and re-use of content. An example of this is our Community Posts site, and I have also previously discussed the potential development and exploitation of this resource. Designed and licensed carefully, such a site could provide open educational resources at both user and programmatic levels.

So what empowers the user/student and puts them in control? Data portability and Creative Commons licensing? ((Actually, I’m starting to think that CC licensing is little more than an interim step towards a better understanding of ‘data’. See ‘You don’t nor need to own your data’. When knowledge is transmitted online, every aspect of its representation is in a form of data. Both information and instruction become ‘data’ – isn’t it backwards to think of knowledge in terms of something ‘owned’? Do you think of instructional methods as ‘yours’?)).

LaTeX support in WordPress

My recent proposal to do a workshop session on WordPress MU and BuddyPress at this year’s ALT conference was accepted on the condition of a couple of modifications. It’s been suggested that it should be run as a demonstration rather than a workshop and that I offer more detail on what will be demonstrated. Fair enough. The reviewer of my proposal suggested that I might aim the session at “teachers of mathematics-intensive disciplines because of WordPress’ decent support of LaTeX for processing mathematical formulae.”

This isn’t an area I would normally think to support (although I did write my MA dissertation in LaTeX, using LyX – it produces beautiful typeset text, regardless of whether you use it for science-related work). Anyway, a quick search showed that, indeed, WordPress has supported LaTeX on both wordpress.com and via a plugin for a couple of years. You can adjust the size and style of the output and enable it for comments, which, if you’re discussing mathematical formulae with peers, could be of huge benefit.
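
For anyone who hasn’t seen it, the syntax is just a $latex …$ shortcode dropped into a post or a comment, with the formula written as ordinary LaTeX. For example, the first of Maxwell’s equations below would be entered something like this (the &s= size switch is from memory, so treat it as an assumption and check the documentation):

    $latex \nabla \cdot \mathbf{D} = \rho_f$

    $latex \nabla \cdot \mathbf{D} = \rho_f&s=2$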

Maxwell’s Equations

\nabla \cdot \mathbf{D} = \rho_f

\nabla \cdot \mathbf{B} = 0

\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}} {\partial t}

\nabla \times \mathbf{H} = \mathbf{J}_f + \frac{\partial \mathbf{D}} {\partial t}