Making sense of things: A PhD by Published Work

At this time of year we have our annual appraisals, which involve looking back on the personal and professional objectives I set myself last year. One of those ambitions was to work towards submitting a ‘PhD by Published Work’ in 2014. In that respect, I think I have failed to complete anything substantial. In the last 12 months, I have spent most of my time running JISC-funded projects ((Since January 2011, I am/have been PI on DIVERSE, Orbital, ON Course, Bebop and Linkey.)) and, related to that, trying to develop the LNCD group here at Lincoln. It is good work, and I enjoy it, but unlike most academics who receive funding for research and/or development, I am not that interested in writing papers about the work I do day-to-day. Technology-related projects, such as those funded by JISC, produce lots of academic, peer-reviewed papers, but it has never occurred to me to write about this side of my work in a systematic way. Perhaps this is because I joined the university having been an Archivist and Project Manager elsewhere; outside academia, the accomplishment of the project is itself the objective, and writing up papers doesn’t really factor into the work.

Also, the projects I run day-to-day, while I enjoy them and think they are of value, don’t provide me with the sense of meaning or purpose that led me to enter higher education as an undergraduate and post-graduate student. It had never occurred to me that my vocation and my efforts in higher learning should or would ever coincide, until a year or so ago, when I was offered a permanent academic contract at Lincoln rather than the fixed-term, non-academic contracts I had been on. Being ‘an academic’ is unusual in that it combines a lifetime of higher learning with a vocation, something I have been slow to put into practice over the last year.

I need to get out of this habitual way of thinking and start to make direct connections between my day-to-day work and my intellectual aspirations, eventually completing a PhD. It is not easy and I would appreciate any advice you might have. Looking over my blog this morning, of the 209 posts I’ve written since April 2008, I have categorised 58 of them as being related to what I want to pursue for my PhD. Doing this has allowed me to find some coherence in my work, which I will outline here for my own benefit and perhaps even for your interest.

The first period of those posts covers the time I spent thinking and writing about ‘resilient education’ with Richard Hall. It began with a failed bid to JISC around business continuity within HEIs in the face of an energy crisis. One question that framed our work at this time was, “What will Higher Education look like in a 2050 -80% +2C 450ppm world?” Over a number of blog posts, face-to-face conversations, workshops, a conference paper and, eventually, a journal paper, we both came to feel that the imperative of economic growth, and therefore the social relations of capitalism, are the root cause of the crises we face in society and therefore in higher education, too. It was through conversations with my colleague Prof. Mike Neary that I began to shift from viewing this problem as technological, economic and cultural, to one that is fundamentally historical and political. At first, I was drawn to ideas that have circulated for some time among environmentalists and ‘ecological economists’, such as that of a non-growth-based ‘Steady State’ economy and the work of the Transition movement. However, I remained unsatisfied with these liberal solutions to liberal capitalism. ((The idea of a ‘steady state’ can be found in one of the classical texts of Liberalism, John Stuart Mill’s Principles of Political Economy.)) Such approaches have some practical merit, but I soon found them intellectually impoverished compared to the approach of critical political economy.

The most challenging, stimulating and useful scholarship I have read over the last couple of years has been by writers in the Marxist tradition of critical political economy. Writers such as Simon Clarke, Moishe Postone, Mark Neocleous, Harry Cleaver and Mike Neary have provided me with a radically different way of understanding the world and the challenges that society, and therefore higher education, faces. So far, I have attempted to use the methodological tools provided by this critical, scholarly tradition in one book chapter, which I remain dissatisfied with but nevertheless see as a building block towards something more mature. That piece of writing was, if you like, my research output from the ChemistryFM OER project that I ran. As well as my article with Richard Hall, I also co-authored an article with Mike Neary, both of which could be seen as a critique of my work on that OER project.

In terms of scholarly progress, I see now that the article I wrote with Richard Hall was the culmination of a period spent trying to address the first ‘research question’ of ‘What will Higher Education look like in a 2050 -80% +2C 450ppm world?’ By the time the article was written (and certainly by the time it was published), we had both moved on, and in my book chapter and article with Mike Neary, I was trying to critique the approach that Richard and I had taken in the earlier article, which makes a case for open education as an approach to developing a ‘resilient education’. Today, I do not think that openness, as it is commonly understood and practised, will re-produce a more resilient, sustainable higher education as we commonly understand and practise it. There is much more to be said about this, and in saying it I have shifted my emphasis to thinking about what openness means and how it is practised inside and outside higher education. One of the earliest and most influential expressions of openness can be found in hacker culture.

During this period (the last year or so), I have been interested in hackers, hacking and how student hackers can re-produce the university. One of the reasons I made this shift was that in my day-to-day work I was managing JISC-funded technology projects and, influenced by Mike Neary’s work on Student as Producer, employing students and recent graduates to work with me on these R&D projects. During the course of this work, I was also appealing to university management to support a more formal collaboration among different departments at the university, one which focused on the role of students in re-producing the university through their work on technology projects. The result of those committee papers and discussions was LNCD. Working closely with student hackers, and more closely with the university developer community in general, I began to think about the history and practice of hacking. This work is ongoing as I sketch out ideas on this blog. I began with a simple proposal that we should understand hacking as an academic practice, or rather as a development of the academic tradition. I developed this a little further in a short article for the Guardian, which reflected on hacking, Student as Producer and a student hacker conference that we organised at Lincoln with the DevCSI project, called DevXS.

The JISC projects, LNCD, the reflections on hacking and the DevXS conference were then written up in a case study commissioned by JISC on ‘institutional approaches to openness’. The case study was called ‘Hacking the University’ and a similar version of it, along with an additional section by my colleague Dean Lockwood, will be published in a book next month.

As I continued in this vein, I began to think about hacking as both learning and labour, and tried to articulate this in a couple of blog posts about learning a craft and the university as a hackerspace. At that time, I thought that one intervention I might make at Lincoln, in trying to get students to challenge and re-produce ‘the university’ as an idea as well as a living institution, was to develop a course based on the model of hackerspaces and examine the work of hackers pedagogically in terms of a craft. I was also thinking about how funders like JISC could support this approach. This led me to look at popular models of funding, such as the ‘angel investment’ of Y Combinator, which I am beginning to tie back into the history of hacking and the origins of venture capital in US universities. My two most recent posts in this area (here and here) have been sketches for an article I intend to write on the role of universities in the development of hacker culture. Only once I have examined and critiqued this aspect of hacker culture’s history do I feel I can move on to more substantive and specific questions relating to hacking, openness and freedom, and the relationship between students, universities and venture capital in producing a new form of vocationalism within higher education.

This is likely to form the major part of my PhD by publication, but despite my writing long reflections on this blog, they do not yet amount to anything that I can readily submit for the PhD. I have also neglected to apply any methodological critique to my recent writing about hackers and hacking, and need to return to the central categories of Marx (value, fetishism, class struggle, alienation), each of which I see as central to the re-production of the work of hackers and hacking and therefore the role of the university.

I also feel that I am currently a long way from reconciling the projects I do day-to-day with the critique of political economy that I have started. Last week, for example, I had a conference paper proposal accepted on an evaluation of CKAN for research data management. On the face of it, this would normally be a fairly straightforward critical appraisal of a piece of software, and for the conference that is what I intend to write. But I know that I should take the opportunity to develop the paper into something more intellectually substantive and incorporate it into a negative critique of openness, open data and university research culture. Whatever I end up writing, I am going to ensure that from here on, my time is spent bringing my writing together into a coherent PhD submission in two or three years’ time.

Finally, if you are interested, here is the section of university guidance that deals with ‘PhD by Published Work’ (PDF). There are a number of things I have to do between now and my submission, not least keep writing, but also seek clarification around the meaning of ‘a substantial contribution to the academic endeavour of the University’. Co-authored outputs are permissible, but I need to be much clearer on what the Faculty Research Degrees Board expects to be included. At the moment, I am assuming that only my book chapter on OER is eligible for submission as part of the PhD. Not only that, but it is the only piece of writing that approaches the standard of intellectual work I would want to submit. Of course, I could be persuaded otherwise… Nevertheless, I think I need to have four more pieces published in order to submit the PhD, as well as writing a 5-10,000-word commentary. Expect updates on this blog as I work towards this, and thanks for any advice you can offer.

Problems with the RSS feed

David Kernohan complained recently that I don’t blog any more. Well, it’s true that I don’t blog *here* as much as I used to because over the last year or so I’ve been running several projects and, when I have time, often blog on the respective project sites.

However, I am still blogging here; there has just been a problem with the Feedburner RSS feed. It seems that it has been timing out and hasn’t picked up any posts since April 2012. I’ve been trying to fix it this morning, but Feedburner is pretty strict about how long it will wait to fetch a feed before it times out. I’m going to have to overhaul the blog server to really fix it. In the meantime, why not subscribe to the actual blog feed, where you’ll see that I’ve written a number of posts since last April and am currently exploring the role of the university in the history of hacking.

Thanks for reading!

Example OUseful mashup for Online Journalism students

I co-teach Online Journalism for level three students with Bernie Russell and this week, Tony Hirst from the OU came to Lincoln to give his now-annual data-driven journalism class. Bernie and I prep the students a few weeks beforehand and then Tony rolls in and packs as much into the class as he can, leaving me and Bernie to pick up the pieces 😉 We’re grateful for it.

Here are Tony’s slides from this week:

If you’re a student struggling with the Wikipedia/Pipes/Google Maps exercise, here’s a working example that you can clone and work backwards through to understand how it works. It’s basically slide 6 of the presentation above.

UPDATE: What follows is broken because of changes to the Wikipedia source page structure, changes to Yahoo Pipes and changes to Google Docs. Trying to keep it working is a pain, so it will have to stay broken for now.

Start by looking at this Yahoo Pipe:

http://pipes.yahoo.com/joss_winn/oj3ddjwikipediamashup

When you’re signed into Yahoo Pipes, clone the pipe above and then click ‘View Source’. You’ll see this:

Source of pipe

The CSV source is https://docs.google.com/spreadsheet/pub?key=0Arh4BnSV2XSIdG1aUmd2dlFkTjdwRjlnazdKTk5mckE&single=true&gid=0&range=A1%3AG138&output=csv

You can look at the spreadsheet that is pulling data in from Wikipedia here:

https://docs.google.com/spreadsheet/ccc?key=0Arh4BnSV2XSIdG1aUmd2dlFkTjdwRjlnazdKTk5mckE&usp=sharing
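
As an aside, if you’re wondering how the spreadsheet gets the data out of Wikipedia in the first place: the usual approach (and, I believe, the one used here) is Google Sheets’ IMPORTHTML function, which scrapes the nth table or list from a web page with a formula along the lines of `=IMPORTHTML("http://en.wikipedia.org/wiki/...", "table", 1)`. Open the spreadsheet and click on the first cell of the data to see the exact formula used.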

Note how I’ve fetched the CSV into Yahoo Pipes, defined the data I’m interested in, renamed two key attributes (the title attribute becomes ‘population’), and then used the location builder in a loop block to determine the geo-locations. Once that’s done, it runs in the Pipe like this:

Is this displaying correctly? I’ve found that embeds directly from Yahoo Pipes can be a bit flaky.

However, if you right-click on the KML link, copy it, and paste it into the search box of Google Maps, you should see something like this:


[Embedded Google Map: View Larger Map]

You can see that both Yahoo Pipes and Google Maps allow you to embed the map into any web page.
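
If you’d rather see the same logic as ordinary code instead of a visual Pipe, here is a rough Python sketch of what the Pipe does: fetch the published CSV, pick out the fields of interest, geocode each place name, and write the results out as KML. To be clear, this is my own illustrative equivalent, not the Pipe itself: the `place` and `population` column names are assumptions (they depend on how the spreadsheet is laid out), and the Pipes location builder is stood in for by the geopy library’s Nominatim geocoder.

```python
# A rough, illustrative Python equivalent of the Yahoo Pipe above -- a sketch,
# not the Pipe itself. Assumptions: the CSV has 'place' and 'population'
# columns, and geopy is installed (pip install geopy). The Pipes location
# builder is stood in for by OpenStreetMap's Nominatim geocoder.
import csv
import io
import urllib.request

from geopy.geocoders import Nominatim

CSV_URL = ("https://docs.google.com/spreadsheet/pub"
           "?key=0Arh4BnSV2XSIdG1aUmd2dlFkTjdwRjlnazdKTk5mckE"
           "&single=true&gid=0&range=A1%3AG138&output=csv")


def fetch_rows(url):
    """Fetch the published CSV and return it as a list of dicts."""
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))


def to_kml(rows, geocoder):
    """Geocode each row and build a KML document of placemarks."""
    placemarks = []
    for row in rows:
        location = geocoder.geocode(row["place"])  # assumed column name
        if location is None:
            continue  # skip anything the geocoder can't resolve
        placemarks.append(
            "<Placemark><name>{name}</name>"
            "<description>Population: {pop}</description>"
            "<Point><coordinates>{lon},{lat},0</coordinates></Point>"
            "</Placemark>".format(
                name=row["place"],
                pop=row.get("population", ""),  # assumed column name
                lon=location.longitude,
                lat=location.latitude,
            )
        )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks) + "</Document></kml>")


if __name__ == "__main__":
    geocoder = Nominatim(user_agent="oj3-mashup-example")
    with open("places.kml", "w") as f:
        f.write(to_kml(fetch_rows(CSV_URL), geocoder))
```

Run it and you get a places.kml file which, once hosted somewhere public, can be pasted into the Google Maps search box just like the Pipe’s KML link.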

Give it a try and get in touch if you’re having trouble. Of course, we can talk about it in class, too.

How to build a Github in a university?

This post is not about building a source code repository in a university.

Alex posted this presentation to our group’s mailing list a while back. If you are a developer or work with developers, it will mean something to you. Take a look and then read on.

Github are one of a few companies that talk openly about how they organise themselves and their work. Valve (cf. their Handbook – PDF), Automattic and 37signals are similar examples of how technology companies are both using and building technologies to change the way they work and the way they understand the role of work in our lives. When I watch presentations of enviable working environments like Github’s, I try to think about how it translates to working in a university, which in many respects also offers an enviable working environment: academics, at least, manage their own time, pursue their own interests, and receive decent paid holidays, a pension, full pay while sick, and so on. There might be aspects of university life which we complain about, but relative to most other work, it is good work. I enjoy it and on the whole find it very satisfying.

In LNCD, we’ve been watching companies like Github for a while now and have been trying to learn from them and put their approach to work into practice. It is not easy and, off the top of my head, here are a few reasons why:

  • It is not simply a case of companies like Github employing an ‘agile methodology’; there are significant structural differences between technology startups and large institutions, too.
  • Github is a relatively small (108 employees) company compared to a university, which in our case has over 1300 staff and 12,000 students.
  • Github offers a single service (github.com) which, although it comprises a large number of underlying technologies, provides focus and direction for employees. Such narrow focus could grow dull over time, but I’m sure that the scale of growth (1.9m users over 4 years) keeps things interesting and challenging, and during that time new complementary technologies have been introduced which they have leveraged (e.g. configuration management software like Puppet).
  • Github appears to be a rapidly growing, dynamic company that currently works on a 1:17500 staff:customer ratio. Their customer base is technologically savvy and so they no doubt benefit from fast and valuable feedback, which drives the product forward. They release a new version of their product/service between 20 and 40 times each day.
  • A university’s focus is broad, loosely arranged around the themes of research, teaching, learning and enterprise. ‘Education’ doesn’t sum it up. I can’t think of a word that does. Sometimes universities focus on ‘the student experience’ or ‘research excellence’. Within these themes there are then multiple disciplines (e.g. Engineering, Arts, Humanities, Social Science), which have their own needs and expectations of the institution that supports them. Traditionally, they have operated quite autonomously and still do at some institutions. Organisationally, they are brought together as Colleges, Faculties, Schools, Departments, Teams and Committees. Perhaps it helps if we think of universities as a federation of multiple organisations. In some ways, research groups are similar to firms.
  • Universities grow quite slowly. They typically require large amounts of land and building work, and the staff:student ratio at Lincoln is 1:9, which includes non-academic staff. Communication and ‘feedback’ within a university tends to happen quite slowly, too. We have annual surveys of both staff and students, course evaluations, committees and ways of providing ad hoc feedback, but compared to Github, a university is a relatively stable organisation, and universities tend to have a great deal of longevity.
  • Github’s message to other companies who wish to emulate them is ‘automate everything’: where a machine can do the work better than a human, the machine should do the work, and the short-term effort put into taking this approach will reap rewards in the long term. This approach frees up the relatively few staff they have to be productive and creative. So far, no-one has left Github to work elsewhere; employees are not being made redundant because their work has been automated.
  • Github builds the tools it needs to develop as a company. There are a number of examples in the slides above. We use one of their tools here at Lincoln, called Hubot. Github (the product) might appear to be a single, coherent service, but it is built on a great many underlying services which are coupled together. Github (the company) has built the service from the ground up using largely open source technologies and, where necessary, developing their own glue and renting complementary services such as Rackspace, Campfire and Heroku.
  • Github claims that it has no managers. I am a manager and I can tell you that, as far as I’m concerned, the holy grail of management is to be able to do away with it. The problem is creating the conditions that allow this to happen. What might they be?
  • Here are a few ideas off the top of my head: a relatively small company, working on a prestige product, where employees feel fortunate to work and can quickly see the benefits of their work; they are aware that their work matters and that people rely on them; there is a great deal of focus on improving internal communication and automating processes wherever possible; they do not require face-to-face contact to be able to contribute, so they can recruit nationally and internationally without employees having to relocate; despite this freedom, all productivity is logged through version control software, Campfire chatrooms, issue trackers, etc., which discourages freeloaders; and employees are encouraged to work on tasks/projects that interest them and might at some point bring benefits to other employees. Interestingly, Github employees regularly open source their project code, which suggests that even with the freedom to hack on stuff of interest, employees are not always required to turn over their IP to the company.
  • There is a culture of innovation in a university – it’s called ‘research’ – but that does not necessarily mean that the university itself as an institution is innovative. At Lincoln, I think that we have been innovative in our teaching and learning strategy and curriculum design (Student as Producer), but this has not yet translated to equivalent technological innovations. Much of the time, I feel like we try to keep up with technological innovations led from elsewhere.
  • Specific institutional innovation groups (e.g. ‘Skunkworks’ teams) seem very rare in higher education. There are Educational Technologists or similar in most universities, but their remit is rarely research and development; most Educational Technologists are regarded as support staff. I find it perplexing that institutions comprising thousands of people do not typically have a small, dedicated R&D team that is core funded. Over time, the value that such a team could generate for the institution should be far greater than the relatively small cost of running it, but it is an investment that may take years to recoup, and some of its value will be hard to evidence as it may not result directly in income generation (e.g. patents, grants, commercial services, etc.). Through innovation, such a group could contribute to the reputation of the university as well as to overall efficiencies, which are harder to place a value on.
  • Finally, on a more positive note, I think universities could be ideal places to grow future companies like Github. Y Combinator recognised this in its early days, and universities have played an important role in the history of hacker culture and venture capital. It seems quite feasible that universities could become hackerspaces whose work fed back into the transformation of research, teaching, learning and enterprise across the institution.

The ‘MIT model’ and the history of hacker culture

Lisp Machines Inc. brochure

In my previous post in this series, I outlined a period in the early 1980s when the work of hackers at MIT was commercialised through the use of venture capital and, as a result, those hackers stopped sharing code. In response, Richard Stallman left MIT to start the GNU project and within a few years had created the GPL ‘copyleft’ license, the most popular open source license in use today.

I concluded by pointing to the Bayh-Dole Act (1980) as an event that is worth understanding when examining the role of universities in the history of hacker culture. In this post, I want to outline what I mean by this and throw out a few ideas that, I admit, need to be more fully explored.

The Bayh-Dole Act

The Bayh-Dole Act was enacted by the US Congress in December 1980 to clarify who held title to inventions arising from government-funded research and to encourage universities to freely exploit the IP they generated. Until that time, IP arising from federally funded research was, by default, owned by the government, and it was left to each federal agency to determine IP arrangements. Since WWII, the majority of research taking place in US universities was (and remains) government funded, and in the midst of the industrial downturn of the 1970s, IP resulting from research was recognised as an under-exploited source of national economic potential. ((In 2004 it was 62% of all university research. See AAU (2006) University Research: The Role of Government Funding))

As an aside, in the UK, the Patents Act 1977 gave employers legal entitlement to employees’ inventions, but it wasn’t until the Department of Trade and Industry’s White Paper, Realising Our Potential (1993), that universities were encouraged to pursue patents arising from their research. Other countries similarly followed the Bayh-Dole Act in the 1990s and 2000s. ((Rigby and Ramlogan outline this in the 2012 report, Compendium of Evidence on the Effectiveness of Innovation Policy Intervention: Support Measures for Exploiting Intellectual Property.)) Until that time, the title to IP generated by publicly funded research in UK universities was usually controlled by the National Research Development Corporation (1948-1981), the National Enterprise Board (1975-1981) or the British Technology Group (under public ownership from 1981-1991).

Similarly, prior to the Bayh-Dole Act in the US, federally funded IP was first controlled by the National Defense Research Committee (NDRC) (1940-1947), then the Office of Scientific Research and Development (OSRD) (1941-1947), and then by several organisations: the NSF, NIH, DOD, NASA, DOE, USDA and others. ((See Mowery (2004) Ivory Tower and Industrial Innovation: University-Industry Technology Transfer Before and After the Bayh-Dole Act.)) Etzkowitz has noted that despite significant levels of federal funding to US universities since WWII, in 1978 less than 4% of government-owned patents had been licensed. By creating a mechanism for universities to own the title to their research, the Bayh-Dole Act provided an incentive and driver for institutional change, and whereas “previously only very few universities had the interest and capabilities to patent and license technology invented on campus”, during 1980-1990 the number of universities with technology transfer offices grew from 25 to 200. Etzkowitz argues that this played a role in the development of Silicon Valley and made research universities “an explicit part of the US innovation system by restructuring the relationship among university, industry and government.” Out of the Bayh-Dole Act arose incubators, research parks, technology transfer arrangements and other entrepreneurial features of modern universities.

When I began to think about the Bayh-Dole Act, I wondered what effect it might have had on Richard Stallman, who, around the time the Act came into force, saw hackers leaving the AI Lab to join two start-up companies (Lisp Machines Inc. and Symbolics Inc.) which were pursuing the exploitation of government-funded research and development that originated at MIT. Was the Bayh-Dole Act a catalyst in the development of the GNU project? The answer was, no, not really. In pursuing this line of inquiry I contacted Stallman, who said that he wasn’t aware of the Bayh-Dole Act at that time.

“In 1980 we all supported commercial fabrication of Lisp machines, because we wanted people to be able to buy them.  Thus, no pressure on us was needed on that particular point. Only the details were controversial.  And we did not foresee the consequences. It could be that MIT’s method of releasing the source code to the two competing companies, which both made it proprietary and set the stage for the software war of 82/83, was facilitated in some way by Bayh-Dole. But I don’t know whether that is so.” ((Email from Stallman, 19th December 2012))

Stallman’s reply is a useful reminder that his work has never been in opposition to the commercial exploitation of software, but rather against the imposition of restrictions on freedom. Although there were clearly differences of opinion among MIT hackers about the way in which Lisp Machines should be commercialised, with a minority opposed to VC funding, the culture of MIT in 1980 was such that the Bayh-Dole Act was following MIT, rather than imposing anything significantly different onto its technology transfer processes. The Act was certainly one of several instruments that provided clarity around the ownership and commercial potential of software and other IP during that period (others were the Copyright Act of 1976 and two amendments in 1980), and by requiring all US universities to consider ways in which government-funded research could be commercially exploited, it marketised research to some extent, creating a more intensive environment for technology transfer in which MIT and other universities found themselves competing. Etzkowitz summarises this as follows:

“Starting from practices developed early in the twentieth century at MIT, university technology transfer had become [with the Bayh-Dole Act] a significant input into industrial development. William Barton Rogers’s mid-nineteenth-century vision of a university that would infuse industry with new technology has become universalised from a single school to the entire US academic research system. Greatly expanded with federal research funding, the US academic enterprise has become a key element of an indirect US industrial policy, involving university, industry and government. The origins and effects of the Bayh-Dole Act are a significant chapter in the spread of the MIT model and the rise of entrepreneurial science.” (Etzkowitz, 113)

The question, then, is not about what effect the Bayh-Dole Act had on Stallman and his fellow hackers in 1980, but rather what was the ‘MIT model’ that was later universalised by the Bayh-Dole Act, and what unique role did it play in the history of hacker culture?

The MIT model

MIT began as a ‘Land Grant’ university, partially funded by a government grant to establish science-based universities which would “promote the liberal and practical education of the industrial classes” ((Read a transcript of the Morrill Act)) while undertaking research to improve agriculture. Land Grants were provided under the Morrill Act of 1862 and were a response to many years of campaigning by farmers and agriculturalists for research institutions that would contribute to the improvement of US farming. The Act led to states being allocated federal land which was to be sold or donated in order to establish such universities. The European Polytechnic movement was also gaining popularity in the US and was seen as a model for new applied science universities, in contrast to the largely teaching universities that existed at that time. Following the Morrill Act, the Hatch Act (1887) and the Smith-Lever Act (1914) further encouraged applied research in US universities, as well as building capacity for technology transfer, again with a specific focus on the needs of agriculture.

Until the Land Grant universities of the late 19th century, there were no ‘research universities’ in the US, and even academic staff dedicated to research were rare. ((Richard C. Atkinson and William A. Blanpied (2008), Research Universities: Core of the US science and technology system)) Founded as both a teaching and research university with a remit to undertake applied science that could be transferred to industry, MIT has always had close contact with private enterprise; from early in its history MIT employed engineers from industry as members of its academic faculty. By the 1920s, these academics were noted for their consulting activities, to the extent that there was tension within MIT between academics who felt that it was their job to focus on teaching and the needs of students, and those who spent a significant portion of their time focusing on the needs of industry. During the Great Depression of the 1930s, MIT was forced to confront this issue as it was accused by private consultants of effectively subsidising academics to consult, amounting to unfair competition. As a result, a policy was established called the ‘one-fifth rule’, whereby MIT academics could spend a day a week using MIT resources to undertake consulting services. “Such activities as providing advice, testing materials in university laboratories and solving problems at company sites had become so much a part of the work of academic engineering professors that it proved impossible to disentangle them from the academic role. Prominent professors felt that their connection to the industrial world through consultation was essential to their research and teaching.” ((Etzkowitz, 37)) The role of academics acting as consultants is now commonplace in universities, but in the US it was at MIT where the practice was first formalised.

To protect and further exploit the industrial research it undertook, MIT developed its first patent policy in 1932. Such a policy allowed MIT to assert ownership of its research, which at that time was mostly internally funded, rather than have it freely exploited by industry partners. With a patent policy in place, MIT could license its research to industry and control its intellectual property.

To support this academic-industrial partnership, MIT was one of the first universities to establish a department to handle its commercial contracts, and its Division of Industrial Cooperation (DIC) later became the model by which the government provided research funding for all other universities. Being unique among universities in having such a department, MIT was in an advantageous position when the US entered WWII. Between 1940 and 1942, MIT’s research funding increased from $105,000 to $5.2M (roughly fifty-fold!), thanks to its foresight in starting internally funded military research early and having bureaucratic processes in place to handle the large increase in research contracts. Prior to WWII, there was little government funding of research outside the land-grant interests of agriculture, with around 5% of university funding coming from government and 40% of that relating to agriculture. Since 1946, federal funding to US universities has risen to between 12% and 26% of income, settling, on average, at around 15% in the 1980s; this funding is dispersed very unevenly and MIT is always one of the top recipients. ((See Lewontin in The Cold War and the University))

There is much to say (in a later post) about what gave MIT its privileged position as a centre for government research during WWII, which led to the formation of military-funded Labs such as the one Stallman worked in during the 1970s. By 1940, through its entrepreneurial efforts, which I have just skimmed over here, MIT had become the model for the military-industrial-academic relationship which has continued to this day.

An entrepreneurial environment

In 1978, the AI Lab received additional DARPA funding for Lisp Machines, and in 1979 discussions began taking place within the Lab about the commercialisation of Lisp Machines. Differences emerged between Noftsker and Greenblatt around the form that commercialisation should take, and this delayed the enterprises until 1981. It would be interesting to know whether the Bayh-Dole Act did help crystallise decision-making among AI Lab management at that time, although more likely it was the knowledge that Xerox had developed a Lisp Machine in 1979 that caused Noftsker and Greenblatt to consider commercialising their work. Tom Knight, who with Greenblatt was the original designer of MIT’s Lisp Machine, has said that in the late 1970s, “MIT was not in the business of making computers for other labs, and was not about to build them” ((If It Works, It’s Not AI: A Commercial Look at Artificial Intelligence Startups)) but as external demand grew, Noftsker and Greenblatt developed their own responses in the form of Symbolics Inc. and Lisp Machines Inc., and Stallman’s hacker community began to crumble.

Stallman worked to keep the Lisp Machine source code free to share until 1983, when, presumably, he saw little hope of re-forming the community of hackers that he longed for and could see that academic culture and the intellectual property regime had changed in ways that were no longer compatible with sharing software. MIT’s long history of entrepreneurialism, and the more recent obligation to commercialise government-funded research, suggests to me that the writing was on the wall for Stallman and for the free sharing of source code within academia, at least until the late 1980s, when responses to this new regime were created from within academia: the GPL (GNU), BSD (University of California, Berkeley) and MIT licenses.

The ‘MIT model’, later universalised within the US as the Bayh-Dole Act, provided universities with the legal means and obligation to exploit federally funded research. In doing so, it established not just a mechanism for patenting but an academic environment that was, overall, more entrepreneurial, as it allowed universities to create partnerships with individual academics who would themselves profit from their research. Researchers working at MIT in the 1970s were encouraged and well supported to look for commercial opportunities deriving from their work. What the Bayh-Dole Act did for MIT and other universities was provide clarification and incentives for exploiting government-funded R&D, which were previously absent. As Etzkowitz states: “In addition to rationalising and legitimising university patenting and licensing, the law induced a psychological change in attitudes towards technology transfer as well as an organisational change in encouraging the creation of offices for this purpose.” (p.114)

In my next post, I shall address the role of military funding in the development of hacker culture and the Labs which became playgrounds for hacking.

New article published: Open education: Common(s), commonism and the new common wealth

Mike Neary and I had an article published recently that offers a critique of ‘the commons’ with particular reference to open education and open educational resources. It was published by Ephemera and can be downloaded directly from this link [PDF].

The article began as a paper for the 7th annual Open Education conference, held in Barcelona in 2010. The conference is international in scope and size, and is seen as the “annual reunion of the open education family.” Our paper was unusual in that it was one of very few attempts at the conference to offer a critique of the central tenet of open education: the commons. The response to our paper was mixed, but there seemed to be an appetite among some delegates for further critique from the perspective of critical political economy.

The paper begins with an outline of the open education and open educational resources movement, situating it within the ‘free culture’ movement, which has grown out of the free software movement of the 1990s. This connection with the world of software and other intangible goods throws up questions about how the apparent freedom of the digital commons can be carried over to the physical world of public education and its institutions. We remain unconvinced that “revolutionary” transformations in how digital property is exchanged can effect revolutionary change in the way we work as educators and students, and we argue that the ‘creative commons’ of the free culture movement is open to much the same critique as its liberal foundations.

The majority of the article offers such a critique, beginning with an examination of how the ‘commons’ has recently been articulated by Marxist scholars. Here, too, we remain unsatisfied with a consumerist focus on the redistribution of resources (i.e. exchange) that leaves unresolved the issue of production (i.e. labour). In response, we argue that the sites and structures of production, our institutions, should be the focus of critique and transformation, so that knowledge as a form of social wealth is not simply shared, but re-appropriated to form a new common sense that is capable of questioning what should constitute the nature of wealth in a post-capitalist society.

The role of the university in the development of hacker culture

A PDP-10 computer from the 1970s.

The picture above is of a PDP-10 computer similar to those found in universities during the 1970s. The PDP-10 was developed between 1968 and 1983 by Digital Equipment Corporation (DEC) and is a useful point of reference for looking backwards and forwards through the history of hacking. The PDP-10 and its predecessor, the PDP-6, were two of the first ‘time-sharing’ computers which, among other significant technological developments, increased access to computers at MIT. Hackers working in the MIT Artificial Intelligence Lab (AI Lab) wrote their own operating system for the PDP-6 and PDP-10 called ITS, the Incompatible Timesharing System, to replace the Compatible Time Sharing System (CTSS) developed by MIT’s Computation Center. Richard Stallman, whom Levy describes as “the last true hacker”, was a graduate student and then an AI Lab staff system hacker who devoted his time to improving ITS and writing applications to run on the computer. Stallman describes the Lab during the 13 years he worked there as “like the Garden of Eden”, a paradise where a community of hackers shared their work.

However, this period came to a bitter end in 1981, when most of the hackers working with Stallman left to join two companies spun off from the Lab. Four left to join Lisp Machines, Inc. (LMI), led by Stallman’s mentor, the ‘hacker’s hacker’ Richard Greenblatt, while 14 of the AI Lab staff left to join Symbolics, Inc., a company led by Russell Noftsker, who had been Head of the AI Lab for eight years and had hired Stallman. (Noftsker had taken over from the original Director, Marvin Minsky, who worked on MIT’s DARPA-funded Project MAC, which later became the AI Lab.) For a while in 1979, Noftsker and Greenblatt discussed setting up a company together to sell Lisp Machines, but they disagreed on how to initially fund the business. Greenblatt wanted to rely on reinvesting early customer orders and retain full control over the company, while Noftsker was keen to use a larger amount of venture capital, accepting that some control of the company would be given up to the investors. Greenblatt and Noftsker couldn’t agree and so set up companies independent of each other, attracting most of the ITS hackers in the AI Lab, to the extent that Stallman’s beloved community collapsed. With maintenance and development of ITS decimated, administrators of the AI Lab decided to switch to TOPS-20, DEC’s proprietary operating system, when a new PDP-10 was purchased in 1982. A year later, DEC ceased production of the PDP-10, which Stallman described as “the last nail in the coffin of ITS; 15 years of work went up in smoke.”

Lisp Machines went bankrupt in 1985, while Symbolics remained active until the end of the Cold War, when the military’s appetite for AI technologies slowed and the company subsequently declined. One more thing worth noting about these two AI Lab spin-offs is that within a year of doing business, Stallman and Symbolics clashed over the sharing of code. Having been deserted by his fellow hackers, Stallman made efforts to ensure that everyone continued to benefit from Symbolics’ enhancements to the Lisp Machine code, regularly merging Symbolics’ code with MIT’s version, which Greenblatt’s company used. Stallman was acting as a middle-man between the two code bases and the split community of hackers. Like other MIT customers, Symbolics licensed the Lisp Machine code from MIT, and it began to insist that its changes to the source code could not be redistributed beyond MIT, thereby cutting off Greenblatt’s Lisp Machines, Inc. and other MIT customers. Stallman’s efforts to keep the old AI Lab hacker community together through the sharing of distributed code came to an end.

In an essay, Stallman writes about how this was a defining moment in his life, from which he resolved to start the GNU Project and write his own ‘free’ operating system. In 1984, Stallman left his job at MIT so as to ensure that the university didn’t have any claim to the copyright of his work; however, he remained as a guest of the AI Lab at the invitation of the Director, Patrick Winston, and still does so today. If you are at all familiar with the history of free software and the open source movement, you will know that Stallman went on to develop the General Public License in the late 1980s, which has become the most popular open source license in use today. Advocates of open education will know that the GPL was the inspiration for the development of Creative Commons licenses in 2001. Arguably, the impact of spinning off Lisp Machines and Symbolics from the AI Lab in 1981 is still being felt, and the 18 hackers who left to join those divergent startups can be considered paradigmatic for many hackers since, who remain conscious of whether they are working on shared, open source software or proprietary software.

Everything I have described above can be easily pieced together in a few hours from existing sources. What is never discussed in the literature of hacking is the institutional, political and legal climate of the late 1970s and early 1980s, and indeed of the decades prior to this period, that led to that moment for Stallman in 1984. In fact, most histories of hacking begin at MIT in 1961 with the Tech Model Railroad Club and, understandably, concentrate on the personalities and the development of an ethic within the hacker community. What is never mentioned is what led Greenblatt and Noftsker to decide to leave that ‘Garden of Eden’ and establish firms. What instruments at that time encouraged and permitted these men to commercialise their work at MIT? Much of what I have written above can be unravelled back several decades to show how instrumental the development of higher education in the USA during the 20th century was to the creation of a hacker culture. The commercialisation of applied research; the development of cybernetic theory and its influence on systems thinking, information theory and Artificial Intelligence; the vast sums of government defense funding poured into institutions such as MIT since WWII; the creation of the first venture capital firm by Harvard and MIT; and, most recently, the success of Y Combinator, the seed investment firm that initially sought to fund student hackers during their summer break, are all part of the historiography of hacking and the university.

Over the next few blog posts I will attempt to critically develop this narrative in more detail, starting with a discussion of the Bayh-Dole Act, introduced in 1980.

References

I’ve linked almost exclusively to Wikipedia articles in this blog post. It’s a convenient source that allows one to quickly sketch an idea. Much needs to be done to verify that information. There are a few books worth pointing out at this introductory stage of the narrative I’m trying to develop.

The classic journalistic account of the history of hacking is Steven Levy (1984) Hackers: Heroes of the Computer Revolution. I found this book fascinating, but it begins in 1958 with the Tech Model Railroad Club (chapter 1) and doesn’t offer any real discussion of the institutional and political cultures of the time which allowed a ‘hacker ethic’ (chapter 2) to emerge.

Eric Raymond’s writings are also worth reading. Raymond is a programmer and, as a member of the hacker tradition, has made several attempts to document it, including the classic account of the Linux open source project, The Cathedral and the Bazaar, and his work as Editor of the Jargon File, a glossary of hacker slang. Again, Raymond’s Brief History of Hackerdom begins in the early 1960s with the Tech Model Railroad Club and does not reflect on the events leading up to that moment in history.

Another useful and influential book on hackers and hacking is Himanen (2001) The Hacker Ethic. Himanen is a sociologist and examines the meaning of the work of hackers and their values in light of the Protestant work ethic.

Tim Jordan’s 2008 book, Hacking, is a general introduction to hacker and cracker culture and provides an insightful and useful discussion around hacking and technological determinism. Like Himanen, Tim Jordan is also a sociologist.

Stallman’s book, Free Software, Free Society (2002), offers a useful first-hand account of his time at MIT in chapter 1.

Sam Williams’ biography of Stallman, Free as in Freedom (2002), later revised by Stallman in collaboration with Williams (2010), is essential reading. Chapter 7, ‘A Stark Moral Choice’, offers a good account of the break-up of Stallman’s hacker paradise in the early 1980s.

E. Gabriella Coleman’s book, Coding Freedom: The Ethics and Aesthetics of Hacking (2012), is an anthropological study of hackers, in particular the free software hackers of the Debian GNU/Linux operating system. Coleman’s book is especially useful as she identifies hackers and hacking as a liberal critique of liberalism. This might then be usefully extended to other movements that hackers have influenced, such as Creative Commons.

Seminar: Hacking and the University

Hacking and the University

The role of the university in the development of hacker culture

The standard history of hacking begins with the Tech Model Railroad Club at MIT in 1961, and hacking has remained closely associated with academic culture ever since. Why is this so, and what intellectual and institutional culture led to the development of a ‘hacker ethic’?

This seminar will propose a history of hacking in universities from the early 20th century, taking into consideration the role of military-sponsored research, the emergence of the ‘triple helix’ of academic, commercial and government enterprise, the influence of WWII-era cybernetic theory, and how the meritocracy of academia gave rise to Y Combinator, the most successful Internet angel investment fund today.

Part of the Centre for Educational Research and Development’s Thinking Aloud seminar series.

November 27th, 1-2pm, MB1012