Hacking as critique: In and against

A selected literature review

As I mentioned in my first post of this series, most histories of hacking begin at MIT in 1961 and make only cursory mention of anything prior to that date. I am interested in the institutional, political and social conditions that gave birth to hacking at that particular time and place. Why MIT? Why 1961? In this series of posts (notes for a journal article), I am focusing on the role of ‘the university’ (i.e. institutionalised academia) in the development of hacker culture. Previously, I suggested that we can take Richard Stallman’s departure from MIT in 1984 as the moment hacker culture became independent of its academic origins, and so for over two decades hackers were very much (although not exclusively) part of academic culture, dependent on and subject to the conditions of their institutions. In my last post, I focused on the commercialisation of scientific research and the gradual trend, over many decades, of US universities valorising their research activity, often at the encouragement of government funding agencies. This process took place over a long period as academics and their institutions shifted from an ethos of “communism” or the “communal character of science” (Merton, 1973) to a more entrepreneurial approach to science (Etzkowitz 1998, 2000a, 2000b, 2001, 2002, 2003).

Periods in history do not have clean start and end dates. The conditions which gave rise to moments like the arrival at MIT of the PDP-1 computer (1961) or the departure of Richard Stallman (1984) are, in my view, more important than the mythic “heroes”, “wizards” and “real programmers” if we want to understand why movements and sub-cultures came to exist, why they may have died, and how we can ensure their longevity. Rosenzweig (1998) provides a useful review of four different approaches to writing the history of the Internet: biographic, bureaucratic, ideological, and social, arguing that

the full story will only be told with a fully contextualised social and cultural history. The rise of the Net needs to be rooted in the 1960s – in both the “closed world” of the Cold War and the open and decentralised world of the antiwar movement and the counterculture. Understanding these dual origins enables us to better understand current controversies over whether the Internet will be “open” or “closed” – over whether the Net will foster democratic dialogue or centralised hierarchy, community or capitalism, or some mixture of both (Rosenzweig, 1998, 1531).

Although Rosenzweig is writing about the history of the Internet and not specifically about hacker culture, the same point can be made here. In my first post, I listed a number of books and articles which discuss hackers and hacking in different ways. Here, I reflect on five of them.

Steven Levy’s (1984) Hackers. Heroes of the Computer Revolution takes the biographical approach. It is the classic text on hackers and the only attempt to develop a coherent (albeit brief) history. Its weakness is that it is a journalistic account of those ‘heroes’, making only cursory mention of the institutional, economic and political conditions they were working in. Nevertheless, it is a fascinating account of the motivations of the individuals involved and includes an epilogue which describes the events surrounding the commercialisation of the AI Lab’s Lisp machines and, consequently, Stallman’s departure from MIT.

Himanen’s (2001) The Hacker Ethic takes a sociological approach, examining the work of hackers and their values in light of the Protestant work ethic. It is a useful attempt to develop Levy’s chapter on the Hacker Ethic and makes a clear connection between hacker culture and scientific research culture within academia. However, his description of that academic culture remains inadequate and draws on Merton’s idealised account of the ‘scientific ethos’, which I mentioned in my previous post. As I have already discussed, the outcomes of scientific research have been objects of proprietary control (patents and licensing), property (copyright) and valorisation since the early twentieth century in the USA. It is the achievement of hackers like Richard Stallman, who subverted these controls with the development of the General Public License, that distinguishes hackers from the scientific culture they grew out of. More recently, that achievement has been forcing the scientific community to re-evaluate the value of “the communal character of science”, as can be seen in the growth of the Open Access movement and the Royal Society’s recent ‘Science as an open enterprise‘ report.

Tim Jordan’s 2008 book, Hacking, is a short, general introduction to hacker and cracker culture and provides an insightful and useful discussion of hacking and technological determinism. Like Himanen, Jordan is a sociologist and presents a positive account of hacking as a social and political project. The weakness of Jordan’s book is that it draws largely on literature written by hackers themselves and as such presents them as heroic “warriors” and “hacktivists”, in the same tradition as Levy and Himanen. What makes Jordan’s book particularly valuable is his argument that “hacking both refutes and demands technological determinism”. That is, hackers both promote the idea of technological determinism and provide a critique of that view.

To me, this suggests that hackers work both in and against a society that appears to be determined by technology, and provide an example of how that often overwhelming sense of determinism can be challenged and subverted. From this position, hacker culture can be seen as one of the most successful counter-cultural movements in recent history, yet one which continues to struggle within a liberal, capitalist world view dominated by money/value, property and the legal system.

In a similar way, E. Gabriella Coleman’s book, Coding Freedom. The Ethics and Aesthetics of Hacking (2012), is especially useful in identifying hackers and hacking as a liberal critique of liberalism. Coleman’s book is an anthropological study of hackers, in particular the free software hackers of the Debian GNU/Linux operating system, and points towards a methodological approach of examining hacker culture and other counter-cultures that are ‘in and against’ a dominant discourse. One particular instrument that hackers employ is Stallman’s ‘copyleft‘ GPL license, which uses the existing law of copyright against itself. Similarly, Creative Commons and the Free Culture movement extend this approach beyond the software domain to all cultural artefacts. By examining hacker culture in this way, we can reveal its limits and the opportunities that the movement presents within liberal capitalist society.

Johan Soderberg’s (2008) Hacking Capitalism is a study of hacking as a political project. In the first chapter, Soderberg offers a ‘background of the hacker movement’ but only briefly mentions the ‘pre-history’ which I am concerned with. He rightly mentions the development of the telephone infrastructure, Norbert Wiener’s theory of cybernetics and its application in war-time funded research projects, which would eventually go on to produce the Internet. He also identifies the anti-war and appropriate technology movements as examples of how personal computing grew out of 1960s counter-culture (Turner and Markoff provide full accounts of this). However, much of Soderberg’s book is an examination of hacking using the categories of Marx’s critique of political economy (class, value, labour, commodities, etc.). In doing so, it is the only book-length study which attempts to methodologically examine hacking from the point of view of a critique of liberalism, rather than starting from a naturalised liberal understanding of categories such as property, work, production and exchange. For this reason, it is an important book (in need of a good editor!).

This very brief survey of five key books about hacker culture demonstrates that Rosenzweig’s remark about histories of the Internet can equally be applied to hacking. Taken together, they reveal that in addition to the substantive body of biographical, social and institutional history, the history of hacking can be approached methodologically in two critically different ways. The first (embodied in Levy’s and Himanen’s books) offers a view of hacker culture from a liberal perspective: despite being mischievous, playful and meritocratic, its ethic is grounded in laissez-faire liberal ideals of property, markets and freedom. The conclusion to Jordan’s book offers a methodological bridge to that which Coleman develops more broadly and Soderberg develops more fully: a study of hacker culture can reveal to us an immanent critique of liberal capitalism. It is a culture that is both in and against; it is complicit but points to a way out through the development of intellectual and practical tools such as copyleft and the sharing and co-production of open source software. The development of this more critical approach to the study of hackers and hacking is overdue and should result in a much stronger defence of free software and hacker culture as it is increasingly incorporated and subsumed into neo-liberal policy and methods of valorisation.

My next post in this series will be about hackers and war.

Critical Open Education Studies

I don’t know what to make of David Wiley’s latest blog post, ‘MOOCs and Digital Diploma Mills: Forgetting Our History‘. I am astonished, to put it politely, that one of the leading thinkers in the Open Education movement is still sitting on the fence, despite having written about the proletarianisation of teaching in his early work. Of course David Noble’s critique of distance education (later expanded in a book) is applicable to open education. Noble’s concern about “the systematic conversion of intellectual activity into intellectual capital and, hence, intellectual property” is not remedied with the simple application of a Creative Commons license – if only it were that easy. Many academics are already free to choose how to distribute their scholarly work (a CC license merely simplifies the traditional transferral of copyright to the journal publisher, resulting in more effective impact for the institution’s outputs), but what Noble was concerned with was the systematic interference by institutions in the re-production of teaching and learning, which is what xMOOCs are undertaking. xMOOCs are capturing value in teaching and learning where it was previously shared at the whim and will of the individual teacher. In choosing to license content under a CC license, such institutions are converting an under-commons into a valorised commons.

David Noble died in 2010 and did not revise his views about distance education in light of the open education movement. I suggest there was no need to: his argument remains apposite for OER-based teaching and learning, too. The content may be ‘free’, but the teacher is drawn further into the valorisation process of the institution.

As David Wiley’s significant number of articles relating to open education shows, the movement has had over a decade to reflect critically on itself, yet there remains a void of reflexive, critical work that attempts to develop the open education movement and protect it from threats such as those which Wiley draws out of Noble’s article. There is no doubt that the work of David Wiley and others to advocate open education and grow the movement is a sincere and important contribution to a notion of the ‘public good’, but the movement still remains largely inward-looking and self-referential. It is dominated by learning developers and technologists who are necessarily focused on implementation and have little time, motivation or opportunity for critique.

Where are the scholarly papers that examine open education from the range of disciplines within the social sciences? What has open education demonstrably learned from the tradition of popular and critical pedagogy? How have the critiques of open source been applied to OER? Similarly, the movement has much to learn from critiques of P2P, but where is this critical, scholarly dialogue occurring?

In the UK, the OER movement has been tightly coupled to state (JISC/HEA) funding, which has now ended. I was the recipient of two grants in this programme of funding. The synthesis evaluation of #ukoer clearly presents the instrumental agenda of the programme. Related conferences are mostly one project after another attempting to demonstrate their ‘progress’, with robust critique almost entirely absent. My experience at Open Education 2010 was the same. Academics seeking funding are naturally keen to satisfy the expectations of their funders, and the effect that these funding programmes have had on the fundamental direction of open education in the UK has yet to be critically examined. What would open education look like if we hadn’t taken the money? Similarly, in the US, state funding and, to a greater extent, philanthropic funding has steered how the movement has developed. Funding is provided on the premise of open education’s public good and we feel obliged to demonstrate this. There is a history of state funding influencing the outputs of academia; what effect has it had on us?

Clearly, if this work is being undertaken, I have not found it, and so I am hoping that others will join me in reviewing the existing critical literature so that we might identify what has been written and therefore what needs to be done. I have made three contributions in the last two years. The first paper, with Richard Hall, addressed the potential role of open education in sustaining higher learning. The second was a critique of the valorisation of institutional OER. The third was a paper with Mike Neary, which critiqued the idea of ‘commons’. While I am trying to develop a critique of open education within the framework of a critique of political economy, I know that approaches from other disciplines within the social sciences will prove insightful and fruitful, too. My next paper will be a critical examination of the historical role of academia in the development of hacker culture (and therefore of notions of ‘freedom’ and ‘openness’ that have returned to the university via the success of Creative Commons). I think much remains to be done to uncover the historical forces, structures and conditions that gave rise to open education. Without this, how can we understand ourselves?

I have looked for literature reviews of open education and found little that satisfies my requirement for texts that critique open education. For example, Mendeley groups point to the usual plethora of blog posts, news articles, reports and project outcomes. A Google Scholar search is not encouraging either. In the apparent absence of ‘critical open education studies’, I hope that you will recommend papers that offer David and me robust critiques of open education, OER and related practices. I think that the development of this area of scholarship would demonstrate the maturity of the movement and protect it from manipulation, co-option and coercion in the future.

The role of the university in the development of hacker culture

A PDP-10 computer from the 1970s.

The picture above is of a PDP-10 computer similar to those found in universities during the 1970s. The PDP-10 was developed between 1968 and 1983 by Digital Equipment Corporation (DEC) and is a useful point of reference for looking backwards and forwards through the history of hacking. The PDP-10 and its predecessor, the PDP-6, were two of the first ‘time-sharing‘ computers and, among other significant technological developments, increased access to computers at MIT. Hackers working in the MIT Artificial Intelligence Lab (AI Lab) wrote their own operating system for the PDP-6 and PDP-10 called ITS, the Incompatible Timesharing System, to replace the Compatible Time-Sharing System (CTSS) developed by MIT’s Computation Center. Richard Stallman, who Levy describes as “the last true hacker”, was a graduate student and then AI Lab staff system hacker who devoted his time to improving ITS and writing applications to run on the computer. Stallman describes the Lab during the 13 years he worked there as “like the Garden of Eden”, a paradise where a community of hackers shared their work.

However, this period came to a bitter end in 1981, when most of the hackers working with Stallman left to join two companies spun off from the Lab. Four left to join Lisp Machines, Inc. (LMI), led by Stallman’s mentor, the ‘hacker’s hacker’, Richard Greenblatt, while 14 of the AI Lab staff left to join Symbolics, Inc., a company led by Russell Noftsker, who had been head of the AI Lab for eight years and had hired Stallman. (Noftsker had taken over from the original director, Marvin Minsky, who worked on MIT’s DARPA-funded Project MAC, which later became the AI Lab.) For a while in 1979, Noftsker and Greenblatt discussed setting up a company together to sell Lisp machines, but they disagreed on how to initially fund the business. Greenblatt wanted to rely on reinvesting early customer orders and retain full control over the company, while Noftsker was keen to use a larger amount of venture capital, accepting that some control of the company would be given up to the investors. Unable to agree, they set up companies independently of each other, between them attracting most of the ITS hackers in the AI Lab, to the extent that Stallman’s beloved community collapsed. With maintenance and development of ITS decimated, administrators of the AI Lab decided to switch to TOPS-20, DEC’s proprietary operating system, when a new PDP-10 was purchased in 1982. A year later, DEC ceased production of the PDP-10, which Stallman described as “the last nail in the coffin of ITS; 15 years of work went up in smoke.”

Lisp Machines, Inc. went bankrupt in 1985, while Symbolics remained active until the end of the Cold War, when the military’s appetite for AI technologies waned and the company declined. One more thing worth noting about these two AI Lab spin-offs is that within a year of their doing business, Stallman and Symbolics clashed over the sharing of code. Having been deserted by his fellow hackers, Stallman made efforts to ensure that everyone continued to benefit from Symbolics’ enhancements to the Lisp machine code, regularly merging Symbolics’ code with MIT’s version, which Greenblatt’s company used. Stallman was acting as a middle-man between the two code bases and the split community of hackers. Like other MIT customers, Symbolics licensed the Lisp machine code from MIT, and it began to insist that its changes to the source code could not be redistributed beyond MIT, thereby cutting off Greenblatt’s Lisp Machines, Inc. and other MIT customers. Stallman’s efforts to keep the old AI Lab hacker community together through the sharing of distributed code came to an end.

In an essay reflecting on this period, Stallman writes about how it was a defining moment in his life, from which he resolved to start the GNU Project and write his own ‘free’ operating system. In 1984, Stallman left his job at MIT to ensure that the university didn’t have any claim to the copyright of his work; however, he remained a guest of the AI Lab at the invitation of the director, Patrick Winston, and still does today. If you are at all familiar with the history of free software and the open source movement, you will know that Stallman went on to develop the General Public License in the late 1980s, which has become the most popular open source license in use today. Advocates of open education will know that the GPL was the inspiration for the development of Creative Commons licenses in 2001. Arguably, the impact of spinning off Lisp Machines and Symbolics from the AI Lab in 1981 is still being felt, and the 18 hackers who left to join those divergent startups can be considered paradigmatic of many hackers since, conscious of whether they are working on shared, open source software or on proprietary software.

Everything I have described above can easily be pieced together in a few hours from existing sources. What is never discussed in the literature of hacking is the institutional, political and legal climate of the late 1970s and early 1980s, and indeed of the decades prior to this period, which led to that moment for Stallman in 1984. In fact, most histories of hacking begin at MIT in 1961 with the Tech Model Railroad Club and, understandably, concentrate on the personalities and the development of an ethic within the hacker community. What is never mentioned is what led Greenblatt and Noftsker to decide to leave that ‘Garden of Eden’ and establish firms. What instruments at that time encouraged and permitted these men to commercialise their work at MIT? Much of what I have written above can be unravelled back several decades to show how instrumental the development of higher education in the USA during the 20th century was to the creation of a hacker culture. The commercialisation of applied research; the development of cybernetic theory and its influence on systems thinking, information theory and Artificial Intelligence; the vast sums of government defense funding poured into institutions such as MIT since WWII; the creation of the first venture capital firm by Harvard and MIT; and most recently, the success of Y Combinator, the seed investment firm that initially sought to fund student hackers during their summer break, are all part of the historiography of hacking and the university.

Over the next few blog posts I will attempt to critically develop this narrative in more detail, starting with a discussion of the Bayh-Dole Act, introduced in 1980.

References

I’ve linked almost exclusively to Wikipedia articles in this blog post. It’s a convenient source that allows one to quickly sketch an idea, though much needs to be done to verify that information. There are a few books worth pointing out at this introductory stage of the narrative I’m trying to develop.

The classic journalistic account of the history of hacking is Steven Levy’s (1984) Hackers. Heroes of the Computer Revolution. I found this book fascinating, but it begins in 1958 with the Tech Model Railroad Club (chapter 1) and doesn’t offer any real discussion of the institutional and political cultures of the time which allowed a ‘hacker ethic’ (chapter 2) to emerge.

Eric Raymond’s writings are also worth reading. Raymond is a programmer and, as a member of the hacker tradition, has made several attempts to document it, including the classic account of the Linux open source project, The Cathedral and the Bazaar, and as editor of the Jargon File, a glossary of hacker slang. Again, Raymond’s Brief History of Hackerdom begins in the early 1960s with the Tech Model Railroad Club and does not reflect on the events leading up to that moment in history.

Another useful and influential book on hackers and hacking is Himanen’s (2001) The Hacker Ethic. Himanen is a sociologist who examines the meaning of the work of hackers and their values in light of the Protestant work ethic.

Tim Jordan’s 2008 book, Hacking, is a general introduction to hacker and cracker culture and provides an insightful and useful discussion of hacking and technological determinism. Like Himanen, Jordan is a sociologist.

Stallman’s book, Free Software, Free Society (2002), offers a useful first-hand account of his time at MIT in chapter 1.

Sam Williams’ biography of Stallman, Free as in Freedom (2002), later revised by Stallman in collaboration with Williams (2010), is essential reading. Chapter 7, ‘A Stark Moral Choice’, offers a good account of the break-up of Stallman’s hacker paradise in the early 1980s.

E. Gabriella Coleman’s book, Coding Freedom. The Ethics and Aesthetics of Hacking (2012), is an anthropological study of hackers, in particular the free software hackers of the Debian GNU/Linux operating system. Coleman’s book is especially useful as she identifies hackers and hacking as a liberal critique of liberalism. This might then be usefully extended to other movements that hackers have influenced, such as Creative Commons.

Triplify: Make your blog mashable

Last week, I wrote about how it is relatively simple to ‘pimp your ride on the semantic web‘. Over the weekend, I stumbled upon Triplify, a small ‘plugin’ for pretty much any web publishing platform that “reveals the semantic structures encoded in relational databases by making database content available as RDF, JSON or Linked Data.” What is so appealing about Triplify is how easy it is to implement, especially alongside a WordPress site.

I can confirm that the three-step installation process is all it takes, although I wouldn’t undertake implementing this blindly as you are, literally, exposing a semantic representation of your database content. In other words, you should look at the configuration file you’re using and check that it’s going to expose the right data, and not clear-text passwords or unpublished posts and comments. Before I implemented it, I realised that it would expose comments on a bunch of posts that I have since made private (they were imported from an old, private blog), so I had to ‘unapprove’ those comments so the script didn’t expose them to the public. A five-minute job. Alternatively, the script could probably be modified to work around my problem, by only exposing comments after a certain date, for example, as sketched below.
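To illustrate, here is a simplified sketch of the relevant part of a Triplify config.php – written in the spirit of the WordPress example config that ships with Triplify, not a copy of it. Each array entry maps a URI path fragment to the SQL that selects what gets exposed, so tightening the WHERE clause is what keeps private data private. The date cut-off is my hypothetical workaround from above.


$triplify['queries']=array(
// Only published posts: drafts and private posts never leave the database.
'post'=>'SELECT ID AS id, post_title AS "dc:title", post_content AS "sioc:content",
         post_date AS "dcterms:created^^xsd:dateTime"
         FROM wp_posts WHERE post_status="publish"',
// Only approved comments and, as a hypothetical workaround, only those
// left after a certain date.
'comment'=>'SELECT comment_ID AS id, comment_content AS "sioc:content"
            FROM wp_comments
            WHERE comment_approved="1" AND comment_date > "2009-01-01"',
);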

The end result is that, with a WordPress site, you expose a semantic representation of your users, posts, pages, tags, categories, comments and attachments in RDF (N-Triples) and JSON formatted data (for JSON, just add ‘?t-output=json’ to the end of the URI). Like I said, though, it could be used on any database-driven web application. Here’s what you get when you expose the high-level links to your content:


<http://blog.josswinn.org/triplify/> <http://www.w3.org/2000/01/rdf-schema#comment> "Generated by Triplify V0.5 (http://Triplify.org)" .
<http://blog.josswinn.org/triplify/> <http://creativecommons.org/ns#license> <http://creativecommons.org/licenses/by/2.0/uk/> .
<http://blog.josswinn.org/triplify/post> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> .
<http://blog.josswinn.org/triplify/attachment> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> .
<http://blog.josswinn.org/triplify/tag> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> .
<http://blog.josswinn.org/triplify/category> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> .
<http://blog.josswinn.org/triplify/user> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> .
<http://blog.josswinn.org/triplify/comment> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> .

Here’s an example of what you get when you expose the full content:


<http://blog.josswinn.org/triplify/post/154> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://rdfs.org/sioc/ns#Post> .
<http://blog.josswinn.org/triplify/post/154> <http://rdfs.org/sioc/ns#has_creator> <http://blog.josswinn.org/triplify/user/1> .
<http://blog.josswinn.org/triplify/post/154> <http://purl.org/dc/terms/created> "2008-10-06T05:55:25"^^<http://www.w3.org/2001/XMLSchema#dateTime> .
<http://blog.josswinn.org/triplify/post/154> <http://rdfs.org/sioc/ns#content> "Up early to go to Sheffield for LPI exams. The last week has left me underprepared. Never mind." .
<http://blog.josswinn.org/triplify/post/154> <http://purl.org/dc/terms/modified> "2008-10-06T20:12:15"^^<http://www.w3.org/2001/XMLSchema#dateTime> .

...

<http://blog.josswinn.org/triplify/post/154> <http://www.holygoat.co.uk/owl/redwood/0.1/tags/taggedWithTag> <http://blog.josswinn.org/triplify/tag/27> .

...

<http://blog.josswinn.org/triplify/post/154> <http://www.holygoat.co.uk/owl/redwood/0.1/tags/taggedWithTag> <http://blog.josswinn.org/triplify/tag/41> .
<http://blog.josswinn.org/triplify/post/154> <http://www.holygoat.co.uk/owl/redwood/0.1/tags/taggedWithTag> <http://blog.josswinn.org/triplify/tag/42> .

...

<http://blog.josswinn.org/triplify/post/154> <http://sdp.iasi.rdsnet.ro/semantic-wordpress/vocabulary/belongsToCategory> <http://blog.josswinn.org/triplify/category/22> .

...

<http://blog.josswinn.org/triplify/tag/154> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.holygoat.co.uk/owl/redwood/0.1/tags/Tag> .
<http://blog.josswinn.org/triplify/tag/154> <http://www.holygoat.co.uk/owl/redwood/0.1/tags/tagName> "valentine" .

You can choose to expose different levels of information in your HTML source. If you have more than a moderate amount of content, you’ll probably want to just expose the top level links as in the first example and let the users of your data dig deeper. You’ll also note that you can (and should) attach a license to your data.

A number of namespaces are recognised as well as a WordPress vocabulary.


$triplify['namespaces']=array(
'vocabulary'=>'http://sdp.iasi.rdsnet.ro/semantic-wordpress/vocabulary/',
'rdf'=>'http://www.w3.org/1999/02/22-rdf-syntax-ns#',
'rdfs'=>'http://www.w3.org/2000/01/rdf-schema#',
'owl'=>'http://www.w3.org/2002/07/owl#',
'foaf'=>'http://xmlns.com/foaf/0.1/',
'sioc'=>'http://rdfs.org/sioc/ns#',
'sioctypes'=>'http://rdfs.org/sioc/types#',
'dc'=>'http://purl.org/dc/elements/1.1/',
'dcterms'=>'http://purl.org/dc/terms/',
'skos'=>'http://www.w3.org/2004/02/skos/core#',
'tag'=>'http://www.holygoat.co.uk/owl/redwood/0.1/tags/',
'xsd'=>'http://www.w3.org/2001/XMLSchema#',
'update'=>'http://triplify.org/vocabulary/update#',
);

So, what’s the point in doing this? Well, it’s fairly trivial and if you think that structured, linked, machine-readable, licensed data is a Good Thing, why not? The Triplify website lists a number of advantages:

Such a triplification of your Web application has tremendous advantages:

  • The installations of the Web application are better found and search engines can better evaluate the content.
  • Different installations of the Web application can easily syndicate arbitrary content without the need to adopt interfaces, content representations or protocols, even when the content structures change.
  • It is possible to create custom tailored search engines targeted at a certain niche. Imagine a search engine for products, which can be queried for digital cameras with high resolution and large zoom.

Ultimately, a triplification will counteract the centralization we faced through Google, YouTube and Facebook and lead to an increased democratization of the Web

The vision of the semantic web and semantic publishing is one of meaningfully identifying objects (and people) on the Internet and showing their relationships. This should improve searches for things on the web, but also improve how we exchange knowledge, re-use information and clarify our identity on the web, too. It’s an ambitious task, but one made easier with tools like Triplify. The semantic web also raises questions over individual privacy and, if data is well formed and accessible, it may be easier to control and therefore censor. The creator of Triplify recently gave a technical presentation on Triplify and how it is being used to publish data collected by the OpenStreetMap project. It shows how geodata exposed in this way can result in mashup applications that directly benefit you and me.

Open Education Project Blueprint

Each participant on the Mozilla Open Education Course has been asked to develop a project blueprint. Here is the start of mine. It’s basically a ‘Personal Learning Environment’ (PLE) ((See Personal Learning Environments: Challenging the dominant design of educational systems)) and I’m going to try to show how WordPress MU is a good technology platform for an institution to easily and effectively support a PLE. I’m going to place an emphasis on ‘identity’ because it’s something I want to learn more about.

Short description

University students are at least 18 years old and have spent many years unconsciously accumulating or deliberately developing a digital identity. When people enter university, they are expected to accept a new digital identity, one which rarely acknowledges or makes use of their preceding experience and productivity. Students are given a new email address and a university ID, are expected to submit course work using new, institutionally unique tools, and develop a portfolio of work over three to four years which is set apart from their existing portfolio of work and often difficult to fully exploit after graduation.

I think this will be increasingly questioned and resisted by individuals paying to study at university. Both students and staff will suffer this disconnect caused by institutions not employing available online technologies and standards rapidly enough. There is a legacy of universities expecting and being expected to provide online tools to staff and students. This was useful and necessary several years ago, but it’s now quite possible for individuals in the UK to study, learn and work apart from any institutional technology provision. For example, Google provides many of these tools and will have a longer relationship with the individual than the university is likely to.

Many students and staff are relinquishing institutional technology ties, and an indicator of this is the massive percentage of students who do not use their university email address (96% in one case study). In the UK, universities are keen to accept mature, work-based and part-time students. For these students, university is just a single part of their lives and should not require the development of a digital identity that mainly serves the institution rather than the individual.

How would it work?

Students identify themselves with their OpenID, which authenticates against a Shibboleth Service Provider. ((See the JISC Review of OpenID.)) They create, publish and syndicate their course work, privately or publicly, using the web services of their choice. Students don’t turn in work for assessment, but rather publish their work for assessment under a CC license of their choice.

It’s basically a PLE project blueprint with an emphasis on identity and data-portability. I’m pretty sure I’m not going to get a fully working model to demonstrate by the end of the course, but I will try to show how existing technologies could be stitched together to achieve what I’m aiming for. Of course, the technologies are not really the issue here; the challenge is showing how this might work in an institutional context.

I think it will be possible to show how it’s technically possible using a single platform such as WordPress, which has Facebook Connect, OAuth, OpenID, Shibboleth and RPX plugins. WordPress is also microformat-friendly and profile information can be easily exported in the hCard format. hResume would be ideal for developing an academic profile. The Diso project is leading the way in this area.

Similar projects:

UMW Blogs?

Open Technology:

OpenID, OAuth, RPX, Shibboleth, RSS, Atom, Microformats, XMPP, OPML, AtomPub, XML-RPC + WordPress

Open Content / Licensing:

I’ll look at how Creative Commons licensing may be compatible with our staff and student IP policies.

Open Pedagogy

No idea. This is a new area for me. I’m hoping that the Mozilla/CC Open Education course can point me in the right direction for this. Maybe you have some suggestions, too?

Mozilla & Creative Commons Open Education Course /1

Yesterday, I started a six-week course on Open Education, organised by the Mozilla Foundation, Creative Commons and the Peer 2 Peer University. Like all 26 participants, I’ll be blogging about the course with the tags ‘MozOpenEd’ or ‘MozOpenEdCourse’ and it looks like we’ll be aggregating as much online activity as possible into a blog I’m setting up here.

The course outline looks really interesting and yesterday’s first online session got off to a good start. Working with people both synchronously and asynchronously online is always interesting, and the enthusiasm and democracy within the group is going to be quite infectious. Some good case studies have been provided for discussion and we’ve each been asked to work on a blueprint project over the next six weeks. I’ll be working on student identity, data portability, portfolios and the transition to and from university, using this WordPress/BuddyPress platform I run at the University of Lincoln. My starting point is that university life should not necessarily encumber students with a further digital identity, but should build on their existing digital identity and portfolio, and that all digital products of student life should be portable, as they are increasingly becoming elsewhere.

My thinking is not at all clear right now, but I’ve got a strong gut feeling that this topic is going to both interest me and be practically implementable. I’m working on a project with a colleague who teaches animation and it may be that I can combine these projects to the benefit of all. I signed up for the course so that I would be forced to clarify my thinking and hopefully benefit the work I’m already doing.

Anyway, more on all of this next week.

Open Calais + site-wide tags = semantic site architecture

Preamble about people

Over the last month, I’ve started to grow an embryonic social web publishing platform that can be many things but fundamentally offers a personalised and collaborative environment for research, teaching and learning. (Where? You’re looking at it!) There are a few active blogs (currently fewer than on the pilot Learning Lab blogs), nearly 70 users, and the word is starting to get out at a pace that I can manage. So, now it’s time to look to the future…

By running BuddyPress, the connections between people are pretty much taken care of. Sign in to http://blogs.lincoln.ac.uk with a Lincoln username and password and you’ve joined a community that, as it grows, will increasingly and effortlessly connect people through the information they choose to add to their profile. Staff and students can click on a link and find other people who have similarly tagged their profile.

Notice the comma-separated hyper-linked data

What is of equal interest to me, and potentially very useful to the university community, is how we link the content that is being generated by staff and students and make those links accessible. It is not difficult to appreciate the potential when you have a revolving community of 10,000 people who, over time, document their work, their research, teaching and learning using cutting-edge web publishing tools, but I’m writing this post to try to understand and sketch out how I might evolve what I have begun.

Put simply, WordPress Multi-User (WPMU) allows one person (me) to provide and manage multiple websites which other people (staff and students) take ownership of. Typically, every action, every new user and every new page and post on every site is recorded and held in a shared database (or databases). Although the data is relational at this low level, on the surface each site pretty much stands alone, and so it should. We’re not talking about a single website with lots of users; we’re talking about lots of websites with lots of users. They might be working collaboratively with others, but they’re working as individuals or in distinct groups that benefit from a distinct online identity. BuddyPress helps bring things together by aggregating people’s actions (i.e. posting blog updates, making friends, joining groups, posting messages), but the visibility of those connections is transient. Social networks display our actions along a timeline and the connections between people are, for the most part, buried until the next time person A interacts with person B.

Enough about connecting people.

Site-wide content aggregation

Site content is a mixture of text, multimedia and metadata. The last thing I’ll do when completing this blog post is to categorise and tag it. Each time I write, I publish text (sometimes images) and metadata which summarises and categorises the full text. Why am I telling you this? You know it already. What you may not know is that each post created on our university WPMU installation, by any person, provided their blog is public, is aggregated into a single site and re-published there. So this post exists here on this site and there, on the Community Posts site. Notice how the Community Posts version links back to the original post. We’re not creating a whole new resource; we’re creating a powerful linked resource that allows others to search, filter, browse and discover content held across multiple sites. With only a few sites up and running here at the moment, the opportunity to discover varied content is limited, but over time that will change. Look at wordpress.com, where there are 5 million sites:

Browse by user-generated metadata

Search over 5 million sites

On the university blogs, this is made possible through the use of the site-wide-tags plugin, which was developed by @donncha, the same person who develops WPMU and the wordpress.com site. By using this plugin, a WPMU installation can share similar functionality to what you see on wordpress.com. I say ‘similar’ because, as I’ll mention later, designing how people discover content is key to all of this and something I, or we as a community, would benefit from thinking about and acting on collectively.

Community Posts

On the Community Posts site, you can search the full-text of every post, filter resources by category and tag, and subscribe to feeds from any combination of tag or category. Any search can be turned into a feed by appending ‘&feed=rss’ to the end of the resulting URL.

e.g. http://tags.blogs.lincoln.ac.uk/?s=gaming&feed=rss

To create a feed from a tag or category, just click on a tag or category and append ‘/feed’ to the end of the URL.

e.g. http://tags.blogs.lincoln.ac.uk/tag/games/

You can combine tags with ‘+’, too:

http://tags.blogs.lincoln.ac.uk/tag/games+development/

You can also specify the type of feed you want by appending:

/feed/rss/
/feed/rss2/
/feed/rdf/
/feed/atom/

Mixing categories and tags is currently broken by a bug but is due to be fixed in the next version of WordPress.
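To show what these feeds make possible, here is a minimal sketch of syndicating the ‘games’ tag feed above into another WordPress site, using fetch_feed(), the SimplePie wrapper bundled with WordPress since 2.8. This would live in a theme or plugin; the feed URL and the choice of five items are just examples.


<?php
// Pull the five most recent posts tagged 'games' from the Community
// Posts site and list them as links.
$feed = fetch_feed('http://tags.blogs.lincoln.ac.uk/tag/games/feed/');
if (!is_wp_error($feed)) {
    foreach ($feed->get_items(0, 5) as $item) {
        printf('<a href="%s">%s</a><br />',
            esc_url($item->get_permalink()),
            esc_html($item->get_title()));
    }
}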

So it’s not difficult to imagine, over time, an active community of thousands of university web publishers, having their content aggregated into a site-wide resource that allows full text searching, browsing and filtering with a choice of feeds to syndicate that content elsewhere. See how it’s happening at the University of Mary Washington, where over 2400 sites have been created in under three years.

Semantic technology

Yesterday, I discovered OpenCalais. It’s a semantic technology that’s been around since January 2008, so you might be tired of hearing about it, but if not, ‘Welcome to Web 3.0!’

The Calais Web Service automatically creates rich semantic metadata for the content you submit – in well under a second. Using natural language processing, machine learning and other methods, Calais analyzes your document and finds the entities within it. But, Calais goes well beyond classic entity identification and returns the facts and events hidden within your text as well.

Nice. And it’s installed on this site. There are three Calais plugins available for WordPress. This one allows writers to submit their blog posts to the OpenCalais web service API and fetch back a number of auto-generated tags based on the content of their post. The longer the post, the more tags are returned, and they come back in just seconds. Those tags can be added to the post in their entirety or used selectively (actually, you have to add them all and then remove those you don’t want to include – a minor irritation). This next plugin allows you to go through every post you’ve written and tag them automatically using the Calais web service. It’s all or nothing, but following the auto-tagging of archive content, you can go to the ‘tags’ menu and delete any tags you don’t want to use. I’ve done that to this site and to the Community Posts site. Calais looks for names, facts and events, and the API allows for up to 40,000 transactions a day and up to four per second. It returns some predictable tags and a few odd ones, but on the whole it is fast and works like magic.

The third plugin also allows blog authors to fetch tags for the post they are writing and, in addition, suggests Creative Commons licensed images based on a dynamic evaluation of the chosen or suggested tags.

The Tagaroo interface

Image suggestion is a nice idea, but tends to return some fairly generic images.

Having used OpenCalais to auto-tag the Community Posts site, a whole new and richer set of semantic metadata has been added with barely any effort. The challenge now is to figure out how to 1) automate this as a scheduled process, so that the Calais plugin looks for new content every hour, say, and tags whatever has been recently introduced (a cron job that calls the plugin and a modification to the plugin to look at the timestamp of the post and ignore anything older than when it was last run? I sketch this below); and 2) present the semantic data in an accessible way, which mostly, I think, comes down to appropriate site design. The wordpress.com screenshots above show one way of doing it. A del.icio.us-style approach is a more powerful and versatile model of tag filtering. Until then, it’s a matter of constructing filters, searches and feeds in the way I’ve outlined above.
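Here is a minimal sketch of that first idea, using WP-Cron, WordPress’s built-in scheduler, rather than a system cron job. The calais_fetch_tags() function is hypothetical – a stand-in for whatever call the plugin makes to the Calais API – and the rest simply tags anything published since the last run.


<?php
// Sketch only: auto-tag newly published posts every hour via WP-Cron.
// calais_fetch_tags() is a hypothetical stand-in for the plugin's
// request to the OpenCalais web service (content in, tags out).

register_activation_hook(__FILE__, 'autotag_schedule');
function autotag_schedule() {
    // Schedule the hourly event once, on plugin activation.
    if (!wp_next_scheduled('autotag_new_posts')) {
        wp_schedule_event(time(), 'hourly', 'autotag_new_posts');
    }
}

add_action('autotag_new_posts', 'autotag_run');
function autotag_run() {
    // Timestamp of the previous run; ignore anything older.
    $last_run = get_option('autotag_last_run', 0);
    update_option('autotag_last_run', time());

    foreach (get_posts(array('numberposts' => -1)) as $post) {
        if (strtotime($post->post_date_gmt . ' GMT') > $last_run) {
            $tags = calais_fetch_tags($post->post_content); // hypothetical helper
            wp_set_post_tags($post->ID, $tags, true);       // append, don't replace
        }
    }
}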

So how might all of this semantically structured data be used? It seems to me that most of the advantages are proportional to the quantity of information available. For teaching and learning, it could be used by students and staff who want to find and re-use material that has been posted in the past for a specific course or subject area. Great for new students who want to measure the type and quality of work produced by students in previous years. In a similar way, it could be used by staff looking for posts by colleagues on subjects they might be teaching, and because searches and tags can be turned into feeds, past content could be aggregated into a new course site. A widely adopted, semantically tagged WPMU installation could also reveal trends in the type of work occurring at the university and, by tagging names of people, queries against references to Prof. X’s work could be made (I also wonder whether through the use of feeds, content from the institutional repository could be joined up with all of this, too – but it’s late in the day and I can’t think straight).

You’ll see from the image below that using Calais on the Community Posts site resulted in a much richer variety of tags than would have appeared if we had relied on user-generated tagging alone (136 posts now have 558 tags). Some people don’t even bother to tag their work… Shame on them! Notice, too, that with the Firefox Operator plugin, you can take a tag on the site and use it to find related resources elsewhere. So if you’re looking at work tagged ‘client-applications’ on WPMU, you can conveniently hop over to delicious and find further web resources or, on a whim, look at what books on this subject are available on Amazon.

Operator provides a way to use tags on one site to discover related resources on another site

Anyway, if you’re still reading, you might remember from the title of this post that my overriding interest in all of this is how it can be understood as and developed into a site-wide ‘architecture’. Again, I’m thinking how user-generated tags have determined the way delicious is designed for navigation and searching of resources. I need to learn more about how WordPress themes are constructed and consider how available functions can be best exploited and usefully presented on this type of site. If you have any ideas or want to work on a specific theme to get the most out of the site-wide-tags plugin, please do leave a comment or get in touch on Twitter @josswinn

History of the Internet, PICOL and CC video

Just a couple of videos which I came across by accident. Both demonstrate how well information can be communicated through animated graphics and images. The first, History of the Internet, “is an animated documentary explaining the inventions from time-sharing to filesharing, from Arpanet to Internet.” I read Where Wizards Stay Up Late this year, a compelling book about the same subject. I can imagine the video being used as an effective teaching resource in class, with the book included on a reading list.

[vimeo 2696386]

The video looks fantastic in HD on my 24″ iMac display 🙂 One of the reasons for this is the use of the PICOL icons, which are an impressive attempt to “find a standard and reduced sign system for electronic communication.” PICOL stands for Pictorial Communication Language and the icons are CC licensed. While reading about the PICOL project, I came across a decent video introducing Creative Commons, which I hadn’t seen before. I think I’ll use it for my Thinking Aloud seminar later this month.