Digital Education

Late last year, I worked with colleagues to propose a way forward for implementing the university’s ‘Digital Education Plan’. Here’s the document that went through university committees and is currently being put into action. A significant investment is being made at Lincoln to support and develop teaching and learning, teacher education and student engagement, under which the recommendations below will be resourced. I continue to have input into all four areas of work recommended. I am leading on the proposed Masters research degree, based on the idea of ‘the university as a hackerspace’, which I have written about previously.

Implementation of the Digital Education Plan

Introduction

This paper sets out four areas of work to enhance and support digital education at Lincoln. These activities are derived from the university’s Digital Education Plan (May 2013), a report following an HEA/Leadership Foundation consultancy (July 2013) and ongoing discussions by the Digital Education Strategy Core group. The implementation of a digital education plan at Lincoln is grounded in two institution-wide projects undertaken over the last five years: Student as Producer and Learning Landscapes. Both of these major projects, as well as other related initiatives, inform the approach and objectives for implementing the Digital Education Plan. In summary, the four areas of work are:

  1. A cross-university digital education group
  2. Incentives and recognition
  3. A multidisciplinary Masters research programme
  4. A framework for re-engineering space and time

A cross-university digital education group

Digital education at Lincoln requires ongoing co-ordination and support. A cross-university group focusing on innovation and support for technology for education should be established within CERD, with formal membership from across the Colleges and key service departments. This group will build on knowledge gained and lessons learned from the LNCD group.[1]

As well as some existing staff from CERD, the group should include three newly established ‘Digital Education Developer’ posts, one attached to each College. Staff from ICT, Estates and the Library should also be formally attached to the group. These core staff would engage with colleagues and students from across the university, encouraging and supporting wider efforts across the Colleges to undertake applied research and development into digital education. This ‘hub and spoke’ model of co-ordinating support, innovation, research and development across an institution is widespread in the sector.[2]

A core principle of the group will be that students and staff have much to learn from each other and that students as producers can be agents of change in the use of technology for education.

Incentives and Recognition

Next, we propose to focus on developing ‘digital literacy’ and ‘digital scholarship’ across the staff and student population of the university. These are broad, inclusive terms that encompass the range of skills, experiences and critical approaches required for teaching, learning and research, and they provide the groundwork for any successful implementation of the Digital Education Plan. Digital education at Lincoln should engender an environment in which students and staff are encouraged and supported to be not just consumers of technology but productive, critical, digitally literate social individuals. In this view, technology is inclusive and understated: the institution advocates it primarily to develop the individual’s critical, social understanding and abilities, which they apply to their learning, so that technology does not become an end in itself. By developing critical digital literacy, we can raise awareness of the nature and effects of inequitable digital practices and, in doing so, encourage socially responsible individuals who are able to challenge exclusive practices and ensure inclusive ways of living in society.

To help achieve this, we will develop optional, accredited modules for both staff and students:

  1. Continuation of the existing 30 credit module, Teaching and Learning in the Digital Age (TELEDA). This has been developed for staff by Sue Watling (CERD) and was successfully piloted in 2012-13. Its primary focus is how online communication and collaborative group work can enable critical thinking and reflection.
  2. An accredited module for staff and students will be developed which is formally aligned with the new ‘Mozilla Web Literacy Standard’[3] and other appropriate models.[4] This will be made available to any student from September 2014 as part of the Lincoln Award and available to staff as part of CPD.
  3. A 30 credit module in digital scholarship will be developed for staff and post-graduate students. Taken in addition to the TELEDA module, this would lead to a Post-Graduate Certificate for staff. It will also be available as an optional module for the MA in Education (in development).

In addition to accredited taught modules, the proposed digital education group will offer support and incentives to staff and students who wish to engage with digital education at the university, through annual internships, research grants/bursaries, and internal and external applications for funding. Through our experience of the Fund for Educational Development (FED) and the Undergraduate Research Opportunity Scheme (UROS), we know this is an effective method of engaging staff and students in research and development that produces tangible research outcomes, and it will help further the critical development of digital education at Lincoln.

A multi-disciplinary Masters research programme

In addition to the digital education group, we propose a new academic programme which will act as a focal point for teaching, research and the development of new technologies, maintaining Lincoln at the cutting edge of digital education and technology culture. Influenced by the rapidly emerging ‘hackerspaces’[5], the programme will seek to learn from what we see happening in hacker culture: new reputational models, ‘fablabs’ and ‘hacklabs’, commons-based peer production, and new methods of innovation funding. This research-based, postgraduate programme should be cross-disciplinary and always experimental in its form and content; a sandpit for innovation, engaging academics and students in the sciences, arts, media and humanities to think deeply about the way technology is used for research, teaching and learning, and for the social good.

This cross-disciplinary research programme will allow students with different disciplinary backgrounds and interests to spend their degree in a physical university space, working together on ideas of their own under the guidance of experienced staff. To bootstrap the Programme, the first intake of 10 students should be recruited with a wide range of artistic, technical and scientific experience and they would pay no tuition fees. In following years, tuition fees would be payable but waivable depending on the accomplishments and impact of the student’s work. Impact could be measured in a variety of ways: technical, commercial, societal and educational. Over time, successful alumni would help attract more students to the programme, developing a culture of hackers and successful startups attached to the research programme. To further attract and incentivise students, ‘angel funding’ should be available on a competitive basis for student projects which show further potential.[6]

The programme combines cross-disciplinary research and development, teaching, learning and enterprise, but it recognises that those processes are changing and that hackers and entrepreneurs are developing a model that is replacing these functions of the university. The opportunities for learning, collaboration, reputation-building/accreditation and access to cheap hardware and software for prototyping ideas can take place outside universities, are already doing so, and arguably should. However, university culture is still a place where the hacker ethic (respect for good ideas, meritocracy, autonomy, curiosity, fixing things, anti-technological determinism, peer review, perpetual learning, etc.) remains relevant and respected.

A framework for re-engineering space and time

Finally, we propose to develop a framework for staff and students to analyse and re-engineer the use of space and time at the University of Lincoln. To create this, we will undertake a comprehensive investigation into the form and content of a large, lecture-based module and its objectives and constraints. In doing so, we will provide staff and students with a toolkit for evaluating and re-designing the space-time of their own programmes through better use of teaching and learning time and the blended use of the physical and virtual space of the university. This guided evaluation would also be built into one or more of the above mentioned teacher education modules so that staff reflect on the way technology may change the way their modules are designed and delivered.

The virtual space – cyberspace – allows us to think critically and imaginatively about the idea and form of university we desire. This approach was central to the Learning Landscapes project, highlighting how critical pedagogy can be used as a design principle, a resource in the design and construction of a counter-space, providing critical tools with which we ‘reverse imagineer’ the university. The ‘edgelessness’[7] of cyberspace allows for ‘Utopian thinking’ through which the constraints of traditional hierarchies of research, teaching and learning become “manifest as entirely different spatial forms and temporal rhythms.”[8] Arguably, we’re already seeing this Utopian thinking in the forms of the Open Data, Open Access and Open Education movements.

Based on the recommendations of the original Learning Landscape project, we propose the following overarching objectives for digital education at Lincoln:

  • Drive research into the effective design and development of digital education
  • Provide support to teachers and students for Utopian thinking and experimentation on the web
  • Include students as clients and collaborators in the design of university digital services
  • Be academically credible. Digital education should not simply be a technical exercise removed from the academic rigour of the university
  • Understand the relationship between space and time: it’s not just ‘cyberspace’, but space-time
  • Articulate the institution’s vision and mission as a connected, networked whole
  • Create incentives. Recognise and reward innovation across all staff and students
  • Create formal and informal management structures that support strategic experimentation and imagineering (e.g. ‘think tanks’, ‘sand pits’, ‘skunk works’)
  • Avoid stereotyping. Bring people together from across subject areas and professions so as to avoid an ‘us and them’ attitude
  • Intellectualise the issues. Generate debate on the nature of academic values and the role and purpose of higher education: the idea of digital education is synonymous with the idea of the university.

In essence, digital education points to a learning landscape that is designed to engender capable, confident and critical individuals engaged in research, teaching and learning, so that they are active producers of their own social world.

Recommendations

We ask that the Education Committee recommends the following:

  1. Urgent recruitment of three Digital Education Developer posts
  2. The development of two further modules of study relating to digital literacy and digital scholarship
  3. The development of a cross-disciplinary Masters level research programme
  4. Research and development into the re-engineering of lecture space-time
  5. On-going investment to support these initiatives.

[2] Walker, Voce and Ahmed (2012) Survey of Technology Enhanced Learning for higher education in the UK, UCISA. p.91

[4] e.g. SCONUL Seven Pillars of Information Literacy (2012)

[6] The Y Combinator (http://ycombinator.com) model of angel funding is a good example of this already happening, where students and recent graduates receive a small amount of funding and lots of support, in return for just a very small share of their enterprise. JISC are also experimenting with this style of ‘angel’ funding in their Summer of Student Innovation programme. http://elevator.jisc.ac.uk

[8] Harvey, David (2000) Spaces of Hope, University of California Press, pp. 237-8

Stallman at Lincoln: ‘A Free Digital Society’

UPDATE (30-11-13): Here is a recording of Richard Stallman’s talk at the University of Lincoln. He spoke for over two hours and then took questions. My recording pauses at 1 hour 15 minutes, as the batteries ran out on my recorder and I switched to my phone; only a couple of minutes were missed in the changeover. The quality of the recording throughout is good. Stallman’s talk will be familiar to anyone who has followed his work, and it provides a broad and structured introduction to his arguments. A key point for me during the evening came between 2:13 and 2:18hrs, when a member of the audience asked Stallman about the need for innovation and progress. Wasn’t non-free, proprietary software acceptable if it results in the saving of lives, as in the health industry? Stallman said that he would choose freedom over progress and innovation. “Freedom is more important than these things! … Sometimes people sacrifice their lives for freedom!”

Stallman’s views are difficult to fully take on board because their implications are so far reaching. Although he is well known for his role in the history of computing, his talk was not primarily about technology but about power and the relationship between citizens and the state. His views follow a liberal (libertarian?) tradition of arguing for liberty and freedom, but are combined with strong socialist reforms to ensure a redistribution of wealth and guarantee equality and inclusion in society. He also makes recurrent moral and ethical arguments – often phrased in terms of good and evil – that I interpret as attempts to cut through the complexity of moral philosophy and speak to common sense.

In person, Stallman is odd, funny, sincere and forceful in his views. I’m not convinced that he fully recognises his arguments as truly revolutionary in their implications, but at this point in time I think they are. I think there’s a need to study his work from the point of view of political philosophy and to work with him to fully connect his arguments with liberal and socialist traditions of thought and political action. For sure, this would lead to effective critiques of his views and, in doing so, help strengthen his arguments and make them more forceful and compelling.

Download the MP3 file.

Creative Commons Licence
Richard Stallman’s talk is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

Reposted from Lincoln’s staff mailing list:

Eminent computer programmer and software freedom pioneer Richard Stallman will be giving a free public lecture at the University of Lincoln.

Stallman, often known by his initials RMS, is best known for creating a computer operating system composed entirely of free software. He pioneered the concept of ‘copyleft’, which uses the principles of copyright law to preserve the right to use, modify and distribute free software, and is the main author of free software licenses which describe those terms, most notably the GNU General Public License (GPL), the most widely used free software license.

Since the mid-1990s, Stallman has spent most of his time advocating for free software, as well as campaigning against software patents, digital rights management, and other legal and technical systems which he sees as taking away users’ freedoms.

Stallman said: “There are many threats to freedom in the digital society. They include massive surveillance, censorship, digital handcuffs, non-free software that controls users, and the War on Sharing. Other threats come from use of web services. Finally, we have no positive right to do anything in the Internet; every activity is precarious, and can continue only as long as companies are willing to cooperate with it.”

Tom Feltwell, from the University’s School of Computer Science, said: “Richard Stallman is very highly regarded and we are incredibly pleased to have him speaking at the University of Lincoln. His talk will be extremely interesting and non-technical so we would love to see members of the public come along to this special event.”

Stallman’s talk on ‘A Free Digital Society’ will take place at 6pm on Friday, 29th November in the Jackson Lecture Theatre, Main Administration Building, Brayford Pool Campus, University of Lincoln.

To register for the free talk please go to https://crm.fsf.org/civicrm/profile/create?gid=240&reset=1

Research groups operate as firm-like entities

One of the sources I have drawn from in my research on the pre-history of hacking in the university is Henry Etzkowitz. His research into the history of MIT and ‘entrepreneurial science’ was especially useful and interesting. As something I want to come back to at a later date, I’ll leave you with this quote from his paper, Research groups as ‘quasi-firms’: the invention of the entrepreneurial university, Research Policy 32 (2003) 109–121 (PDF). I was reminded of this by an article in Tuesday’s Guardian newspaper.

Research groups operate as firm-like entities, lacking only a direct profit motive to make them a company. In the sciences, especially, professors are expected to be team leaders and team members, with the exception of technicians, are scientists in training. As group size increases to about seven or eight members, professors who formerly were doing research are typically compelled to remove themselves from the bench to devote virtually full time to organizational tasks. Often persons in this situation describe themselves as “running a small business”. To continue at a competitive level with their peers, they must maintain an organizational momentum. Once having attained this goal, it is extremely difficult to function again as an individual researcher, so every effort is made to sustain leadership of a group.

Research groups within universities operate as small businesses lacking only a direct profit motive. Discuss!

Hackers, War and Venture Capital

In my previous post of this series, I discussed the role of military funding in the formation of a ‘genealogy’ of university laboratories, their projects, and staff which produced the conditions for hacking during the 1960s and 70s. As I drafted that post, I found myself drifting into a discussion around the role of venture capital but I have split that discussion into this final post below so as to highlight another important aspect in the study of the role of the university in the development of hacker culture.

Levy (1985) points to the arrival in 1959 of the TX-0 computer as a seminal moment in the history of hacking. The computer had been donated by the Lincoln Laboratory to MIT’s Research Laboratory of Electronics (RLE), the original successor of the Rad Lab and today “MIT’s leading entrepreneurial interdisciplinary research organization.” Similarly, Eric Raymond points to the arrival at the RLE of the PDP-1 computer in 1961 as the moment that defined the beginning of ‘hackerdom’. Notably, at that time the RLE shared the same building as the Tech Model Railroad Club (TMRC), the legendary home of the first hackers. The history of hacking is understandably tied to the introduction of machines like the TX-0 and PDP-1, just as Richard Stallman refers to the demise of the PDP-10 as “the last nail in the coffin” for 15 years of work at MIT. Given the crucial significance of these machines, a history of hacking should include a history of the key technologies which excited and enabled those students and researchers to hack at MIT in the early 1960s. To some extent, Levy’s book achieves this. However, in undertaking a history of machines we necessarily undertake a social history of technology and of the institutions and conditions which reproduced its development, and in doing so we reveal the social relations of the university, the state and industry (Noble, 1977, 1984).

The birth of Digital Equipment Corporation

In 1947, the US Navy funded MIT’s Servomechanisms Lab to run Project Whirlwind, which developed a computer that tracked live radar data. The Whirlwind project was led by Jay Forrester, a leading systems theorist and the principal inventor of magnetic core memory (the patenting of which was marked by a dispute between MIT and the Research Corporation, resulting in the cancellation of MIT’s contract with the Corporation).

MIT’s Lincoln Lab was set up in 1951 to develop the SAGE air defence system for the US Air Force, which expanded on the earlier research of Project Whirlwind. The TMRC hackers’ first computer was a TX-0 from the Lincoln Lab, with its cathode-ray display borrowed from the SAGE project’s research into radar. Though large by today’s standards, the TX-0 was smaller than Whirlwind and was one of the first transistorised computers, designed and built at MIT’s Lincoln Lab in 1956–7 (Ceruzzi, 2003, 127). Much of the innovation found in the TX-0 was soon copied in the design of the PDP-1, developed in 1959 by the Digital Equipment Corporation (DEC).

DEC was founded by Ken Olsen and Harlan Anderson, two engineers from the Lincoln Lab who had also worked on the earlier Whirlwind computer. Watching students at MIT, Olsen had noticed the appeal of the interactive, real-time nature of the TX-0 compared to the more powerful but batch-operated computers available, and saw a commercial opportunity for the TX-0. Soon after they established their firm, they employed Ben Gurley, who had worked with them at the Lincoln Lab and had designed the TX-0’s interactive display, which used a cathode-ray tube and light pen. It was Gurley who was largely responsible for the design of the PDP-1. DEC is notable for many technical and organisational innovations, not least that it permitted and encouraged its clients to modify their computers, unlike its competitor IBM, which still operated on a locked-down leasing model. DEC’s approach was to encourage the use of its machines for innovation, providing “tutorial information on how to hook them up to each other and to external industrial or laboratory equipment.” (Ceruzzi, 2003, 129) This appealed not only to the original TMRC hackers but to many of its customers, too, and led to DEC becoming one of the most successful companies funded by the venture capital firm American Research and Development Corporation (ARD).

The birth of venture capitalism in the university

ARD, established in 1947, is regarded as the first venture capital firm and was “formed out of a coalition between two academic institutions.” (Etzkowitz, 2002, 90). It was founded by the “father of venture capital”, Georges Doriot, then Dean of Harvard Business School; Ralph Flanders, an engineer and head of the Federal Reserve Bank in Boston; and Karl Compton, President of MIT. ARD employed administrators, teachers and graduate students from both MIT and Harvard. The motivation for setting up this new type of company was its founders’ belief that America’s future economic growth rested on the country’s ability to generate new ideas which could be developed into manufactured goods, thereby generating employment and prosperity. This echoed the argument put forward by Vannevar Bush that, following the war, “basic research” should be the basis for the country’s economic growth, and both views confirm the idea/ideology that innovation follows a linear process, from basic research which is then applied, developed and later taken into production. However, whereas government was funding large amounts of R&D in universities, the founders of ARD complained of a lack of capital (or rather of a model of issuing capital) that could continue this linear process of transferring science to society.

ARD funded DEC after Olsen and Anderson were recommended by Jay Forrester. This led to an investment of $100,000 in equity and $200,000 available in loans, and within just a few years DEC was worth $400m. This allowed ARD to take greater risks with its investments: “The huge value of the Digital Equipment stock in ARD’s portfolio meant that the relatively modest profits and losses on most new ventures would have virtually no effect on the venture capital firm’s worth.” (Etzkowitz, 2002, 98). ARD’s success marked the beginning of a venture capital industry that has its origins in the post-war university and a mission to see federally funded research exploited in the ‘endless frontier’ of scientific progress. It led to a model that many other universities copied, providing “seed” capital investment to technology firms and establishing ‘startup’ funds within universities. Most recently, we can observe a variation of this method in the ‘angel investment’ firm Y Combinator, which has specifically sought to fund recent graduates and undergraduate students during their summer breaks.

Y Combinator and the valorisation of student hackers

A proper analysis of Y Combinator in the context of the history of hacking, the university and venture capital is something I hope to pursue at a later date. In this current series of posts discussing the role of the university in the ‘pre-history’ of hacker culture, I want to flag that Y Combinator can be understood within the context of the university’s role in the venture capital industry. Just as academic staff have been encouraged to commercialise their research through consultancy, patents and seed capital, Y Combinator, in its early stage, sought to valorise the work of students by offering its ‘summer founders programme’. Similarly, its founder, Paul Graham, has often addressed students in his writing and discussed the role of the university experience in bootstrapping a successful start-up company. Graham’s ongoing essays provide a fascinating and revealing body of work for understanding the contemporary relationship between students, the university, hacking and venture capital. In this way, Y Combinator represents a lineage of hacking and venture capital that grew out of the university but never truly left it. Despite recent claims that we are witnessing the demise of higher education as we know it, the university as a knowledge factory remains a fertile source of value through the investment of public money and the production of immaterial labour, something that Vannevar Bush would be proud of.

Series conclusion

This is the last of a series of six posts on the role of the university in the development of hacker culture. These posts are my notes for a journal article I hope to have published soon which will argue, as I have done here, that the pre-history of hacking (pre-1960) is poorly documented and that much of it can be found in an examination of the history of American higher education, especially MIT.

As an academic who works in a ‘Centre for Educational Research and Development’, and who runs various technology projects and works with young developers, I am interested in understanding this work in the context of the trend over the last decade or so towards ‘openness’ in higher education. Ideas and practices such as ‘open education’, ‘open access’, ‘open educational resources’ (OER) and, most recently, ‘Massive Open Online Courses’ (MOOCs) and ‘open data’ are already having a real impact on the form of higher education and its institutions, and will continue to do so. My work is part of that trajectory, and I recognise that the history of openness in higher education goes back further than the documented last 10-15 years. It is well known that the early efforts around OER, OpenCourseWare and the concurrent development of Creative Commons licenses owe a great deal to the ‘open source’ licensing model developed by early hackers such as Richard Stallman. I hope that in these posts I have shown that, in turn, the free and open source software movement(s) was, in its early formation, a product of the political, economic and ultimately institutional conditions of the university. Richard Stallman felt compelled to leave the academy in 1984 because he found that “communism”, a foundational ethos of science as famously described by Merton (1973), was by that time little more than an ideal that had barely existed at MIT since the Great Depression.

This points towards a history of openness in higher education that is rooted in hacker culture and therefore in the commercialisation of scientific research, military funding regimes and the academy’s efforts to promote a positive ideology of science to the public. Stallman’s genius was the development of ‘copyleft’, in the form of the GPL, which was very influential in the later development of the Creative Commons licenses used (and partially developed) in higher education. Through the growth of the free and open source software movements over the last 25 years, the academy has been reminded (and, as a participant, has reminded itself) that the ideal of communism in science forms the basis of a contract with society that can still be achieved through the promotion of openness in all its forms. However, in hindsight, we should be cautious and critical of efforts to yet again valorise this new agenda in science through calls to adopt permissive licenses (e.g. CC-BY, MIT, ODC-By) rather than Stallman’s weapon of scientific communism: copyleft.

Hacking, war and the university

In each of the posts in this series about the role of the university in the development of hacker culture, I have indicated that central to a history of hacking should be a greater understanding of the role of military research funding. The role of federal funding from government agencies such as the Department of Defense looms so large in the history of hacking that I assumed it would be one of the first posts I wrote, but I found that in order to understand funding of this type, I had to explore the history of US higher education, in particular the purpose of the Morrill Act and how it led to the development of universities whose remit was initially ‘applied’ scientific research and vocational training, in contrast to the teaching universities of the mid-nineteenth century, such as Harvard and Columbia. The Land Grant universities’ focus on applied science and a mandated responsibility for the development of their local regions led to research activity that became increasingly entrepreneurial over the decades, culminating in the Bayh–Dole Act, developed in the late 1970s during a period of economic decline. Similarly, it was economic conditions during the 1920s that led to the development of a model for handling industrial contracts at MIT which was later used for handling federal funding across several universities during WWII (Etzkowitz, 2002; Lowen, 1997).

The defence-funded ‘AI Lab’, where Richard Stallman worked between 1971 and 1984, must be situated within a complex association of projects, people and funding arrangements at MIT that stretches back to the turn of the twentieth century. The fact that hacker culture at MIT during the 1960s and 70s was wholly reliant on military funding has been acknowledged but not studied in the existing literature on hacking, and the extent to which it was a product of university-military-industrial relations is an area for further study.

Before World War II

Federal funding to US universities was not a significant source of research income until the Second World War. Lowen (1997) and Etzkowitz (2002) point to the experience of the First World War and then the Great Depression as stimuli for the closer relationship between universities and federal government. MIT President Karl Compton and Vannevar Bush, at that time Dean of MIT’s School of Engineering and former Vice President of MIT, were among a group of academics who “were dissatisfied with the military’s use of academic science during World War I” (Etzkowitz, 2002, 42). This dissatisfaction should be understood in the context of an eventual shift in science policy leadership from agriculturalists to physicists during the inter-war years (Pielke Jr, 2012). Compton and Bush sought to establish an agency under the control of academics that would liaise with the military and transfer their innovations to a future war effort. Around this time, MIT lost the state funding that had originated with its land grant and entered a financial crisis which almost led MIT to become part of Harvard’s engineering school. To avoid this embarrassment, MIT’s leaders made a conscious effort to develop relations with industry, and by the 1930s the Institute had developed policies for patenting and consulting practices, as well as appealing to alumni networks.

In 1919, MIT implemented a ‘Technology Plan’ in an effort to raise the $8m required to save the Institute. As a beneficiary of many MIT graduates, George Eastman (of Eastman Kodak) provided half of this sum. Yet despite this support, the Technology Plan was only a partial success, with interest from other companies dwindling after the initial contracts expired – after all, MIT was now charging for research services it had once provided to industry for free. By 1939, Etzkowitz notes, “it was accepted that the Technology Plan was a failure” (45). However, the legacy of the plan was much greater, as it established an office that negotiated research contracts with industry, and this office was then used as a model for how government transferred funds to MIT and a few other universities during World War II.

War-time government funding

By the time World War II began, leading academics such as Vannevar Bush, who was by then head of the Carnegie Institution of Washington, had successfully lobbied government to create a federal agency to co-ordinate military research. In contrast to the relatively low position accorded to academic scientists during the First World War, Bush and others sought to place academics at the heart of government policy-making through the establishment of the National Defense Research Committee (NDRC) (1940-41). The composition of this ground-breaking committee was revealing: of the eight original members, four were academics, two were from the military, one was from business and another was the US Commissioner of Patents, underlining the strategic relationship between government, industry and the academy (see LoC records). The most significant achievement of the NDRC’s short history was the formation of the MIT Radiation Laboratory (‘Rad Lab’), which developed radar technology during the war. The Rad Lab (1940-45) was shut down at the end of the war but became the model for future ‘labs’ at MIT and elsewhere, such that there is a ‘genealogy’ of labs (such as the AI Lab), projects (e.g. ‘Project MAC’) and people (like Richard Stallman) that can be traced back to the Rad Lab and the NDRC.

In 1941, the NDRC was superseded by the Office of Scientific Research and Development (OSRD) (1941-47), led by Vannevar Bush. The OSRD was a fully-fledged funding agency for distributing public money to support research at universities. Five universities became the main beneficiaries of this funding during the war: MIT, Johns Hopkins, Berkeley, Chicago and Columbia, and the OSRD co-ordinated a mass migration of scientists from universities across the country to work at one of these select centres of research.

The increase in research funding during WWII was huge. Mowery et al (2004) show that federal R&D funding went from $784.9m to $12.4bn over the 1940-45 period, more than a fifteen-fold increase (all figures from Mowery et al are in 1996 dollars). MIT was the largest single recipient ($886m), receiving almost seven times more than Western Electric, the largest commercial recipient ($130m) (Mowery et al, 2004, 22). Consequently, the contractual arrangements developed at MIT prior to and during WWII, and the level of funding administered on behalf of the federal government, fundamentally changed the relationship between government and universities. The success of this arrangement led President Roosevelt to request that Vannevar Bush draft the famous policy report, Science: The Endless Frontier (1945), in which Bush argued that “basic research” was the basis for economic growth, an assumption that remains common though questionable today (Pielke Jr, 2012).

Post-war funding

Despite a brief dip in funding immediately after the war, when the OSRD was dissolved and discussions took place over the formation of a new peace-time agency, by 1965 federal funding accounted for 73% of all academic R&D funding to US universities, compared to just 24% in 1935. Post-war funding was dominated by two areas: defence and health, with military-related funding split between the Department of Defense, NASA and the Department of Energy. During the 1960s and 70s “golden age” of hacking at MIT, the overall level of federal funding to universities fluctuated, falling from 73% of all university R&D funding in 1965 to 63% in 1985, by which time a greater percentage of income was being derived from industry, assisted by the Bayh-Dole Act. The Second World War solved MIT’s inter-war financial crisis, as Forman has noted:

MIT, on the inside track, emerged from the war with a staff twice as large as it had had before the war, a budget (in current dollars) four times as large, and a research budget ten times as large – 85% from the military services and their nuclear weaponeer, the AEC.

An examination of the funding arrangements for academic R&D from the post-WWI period to the Bayh-Dole Act in 1980 reveals dramatic change, not only in the amount of public money being transferred to universities, but also in the way that academic scientists developed much closer relationships with government and re-conceptualised the idea, practice and purpose of science. A new ideology of science was formed, encapsulated by its chief architect, Vannevar Bush, in Science: The Endless Frontier, which redefined the “social contract” between scientists and government and argued for the importance of funding for “basic research”. Throughout these developments, dramatic changes were also taking place in the institutional forms of universities and in the movement of academic labour from institution to institution and from research project to research project. So-called ‘labs’, like MIT’s Lincoln Lab, were large semi-autonomous organisations in themselves, employing thousands of researchers and assistants. They became the model for later ‘science parks’ and spawned projects and research groups which then became independent ‘labs’ with staff of their own, such as the AI Lab. Stanford University learned from this model, and it arguably led to the creation of Silicon Valley (Etzkowitz, 2002; Gillmor, 2004).

The AI Lab, where Richard Stallman worked from 1971 to 1984, is legendary in the history of hacking (Levy, 1984). Like many MIT labs, its origins can be traced back to the Rad Lab through the Lincoln Lab and the Research Laboratory of Electronics (RLE), where some of its personnel formerly worked and developed their thinking around Artificial Intelligence. The AI Lab began as a research group within Project MAC (Multiple Access Computer and Machine-Aided Cognition). Project MAC was set up in 1963 and originally led by Robert Fano, who had worked in the Rad Lab. J.C.R. Licklider, who helped establish the Lincoln Lab and worked at RLE, had worked from 1962 at ARPA (later DARPA), an agency of the Department of Defense, where he was responsible for the original Project MAC grant; he succeeded Fano as Director of Project MAC in 1968. Licklider remained Director until 1971, a year after Marvin Minsky, who led Project MAC’s AI research group, split off to form the AI Lab in 1970, shortly before Stallman arrived as a Research Assistant. In this pre-history of hacker culture, little more needs to be said about the AI Lab, as it is well documented in Levy’s book; what I wish to underline is the extent to which the AI Lab, Stallman’s ‘Garden of Eden’, was the strategic outcome of institutional, government and commercial relationships stretching back to the NDRC and the Rad Lab.

A “triple helix” or an “iron triangle”?

To sketch the intertwining history of such labs and projects at MIT alone is not straightforward, and a preliminary effort to do so shows, as one might expect, a great deal of institutional dynamism over the years. As economic conditions and government funding priorities shifted, institutions responded by re-aligning their focus, all the while lobbying government and coaxing industry. Etzkowitz refers to this as the ‘triple helix’ of university-industry-government relations and evidence of a “second academic revolution”. Others have been more critical, referring to the “military-industrial-academic complex” – apparently Eisenhower’s original phrase (Giroux, 2007) – and “the ‘iron triangle’ of self-perpetuating academic, industrial and military collaboration” (Edwards, 1997, referring to Adams, 1982). From every perspective, there is no doubt that these changes gradually took place, spurred on at times by WWII and the Cold War. US universities (and later other national systems of HE) initially incorporated research as a social function of higher education (revolution #1) and then moved to “making findings from an academic laboratory into a marketable product” (revolution #2) (e.g. Etzkowitz, 1997, 2001). Today, every university, such as my own, has an ‘enterprise strategy’, ‘income generation’ targets and various other instruments, which can be traced back to the model that MIT established in the 1920s.

Although the accounts of Etzkowitz and Mowery et al are compelling, they only provide cursory mention of the struggle that has taken place over the years as the university has increased its ties with the military and industry. In particular, such accounts rarely dwell on the opposition and concern within academia to the receipt of large sums of defence funding and the ways in which academics circumvented and subverted their complicit role in this culture. A number of books have been written which do critically examine this ‘second revolution’ or the “iron triangle” (e.g. Edwards, 1997; Leslie, 1993; Heims, 1991; Chomsky et al, 1997; Giroux, 2007; Simpson et al, 1998; Noble, 1977; Turner, 2006; Mindell, 2002; Wisnioski, 2012).

As these critics’ accounts have shown, there has always been a great deal of unease, and at times dissent, among students and staff at MIT and other universities in receipt of large amounts of military funding. Although I do not wish to generalise the MIT hackers of the 1960s and 70s as overtly political, they were clearly acting against the constraints of an intensifying managerialism within institutions across the US, and in particular against the rationalisation of institutional life pioneered by the engineering profession and its ties with corporate America (Noble, 1977). Hackers’ attraction to time-sharing systems, the ability to personalise computing, programmatic access to the underlying components of computers and the use of computers for leisure activities is characteristic of a sub-culture within the university (Levy, 1984; Wisnioski, 2012) and, to some extent, of the developing counter-culture of that period (Turner, 2006). Such accounts, I think, are vitally important to understanding the development of hacker culture, as are the more moderate accounts of federal funding and the development of the entrepreneurial university.

My final post in this series highlights the relationship between venture capital, the university and hacking.

Hacking as critique: In and against

A selected literature review

As I mentioned in my first post of this series, most histories of hacking begin at MIT in 1961 and make only cursory mention of anything prior to this date. I am interested in the institutional, political and social conditions which gave birth to hacking at that particular time and place. Why MIT? Why 1961? In this series of posts (notes for a journal article), I am focusing on the role of ‘the university’ (i.e. institutionalised academia) in the development of hacker culture. Previously, I suggested that we can take Richard Stallman’s departure from MIT in 1984 as the moment hacker culture became independent from its academic origins, and so for two decades hackers were very much (although not exclusively) part of academic culture, dependent on and subject to the conditions of their institutions. In my last post, I focused on the commercialisation of scientific research and the gradual trend, over many decades, for US universities to valorise their research activity, often at the encouragement of government funding agencies. This process took place over a long period as academics and their institutions shifted from an ethos of “communism” or the “communal character of science” (Merton, 1973) to a more entrepreneurial approach to science (Etzkowitz 1998, 2000a, 2000b, 2001, 2002, 2003).

Periods in history do not have clean start and end dates. The conditions which gave rise to moments like the arrival at MIT of the PDP-1 computer (1961) or the departure of Richard Stallman (1984) are, in my view, more important than the mythic “heroes”, “wizards” and “real programmers” if we want to understand why movements and sub-cultures came to exist, why they may have died, and how we can ensure their longevity. Rosenzweig (1998) provides a useful review of four different approaches to writing the history of the Internet: biographic, bureaucratic, ideological, and social, arguing that

the full story will only be told with a fully contextualised social and cultural history. The rise of the Net needs to be rooted in the 1960s – in both the “closed world” of the Cold War and the open and decentralised world of the antiwar movement and the counterculture. Understanding these dual origins enables us to better understand current controversies over whether the Internet will be “open” or “closed” – over whether the Net will foster democratic dialogue or centralised hierarchy, community or capitalism, or some mixture of both (Rosenzweig, 1998, 1531).

Although Rosenzweig is writing about the history of the Internet and not specifically about hacker culture, the same point can still be made. In my first post, I listed a number of books and articles which discuss hackers and hacking in different ways. Here, I reflect on five of them.

Steven Levy’s (1984) Hackers: Heroes of the Computer Revolution takes the biographical approach. It is the classic text on hackers and the only attempt to develop a coherent (albeit brief) history. Its weakness is that it is a journalistic account of those ‘heroes’, making only cursory mention of the institutional, economic and political conditions they were working in. Nevertheless, it is a fascinating account of the motivations of the individuals involved and includes an epilogue which describes the events surrounding the commercialisation of the AI Lab’s Lisp Machines and, consequently, Stallman’s departure from MIT.

Himanen’s (2001) The Hacker Ethic takes a sociological approach, examining the work of hackers and their values in light of the Protestant work ethic. It is a useful attempt to develop Levy’s chapter on the hacker ethic and makes a clear connection between hacker cultures and scientific research culture within academia. However, his description of that academic culture remains inadequate and draws on Merton’s idealised account of the ‘scientific ethos’, which I mentioned in my previous post. As I have already discussed, the outcomes of scientific research have been the objects of proprietary control (patents and licensing), property (copyright) and valorisation since the early twentieth century in the USA. It is the achievement of hackers like Richard Stallman, who subverted these controls with the development of the General Public License, that distinguishes hackers from the scientific culture they grew out of and, more recently, is forcing the scientific community to re-evaluate the value of “the communal character of science”, as can be seen in the growth of the Open Access movement and the recent ‘Science as an open enterprise’ report.

Tim Jordan’s 2008 book, Hacking, is a short, general introduction to hacker and cracker culture and provides an insightful and useful discussion of hacking and technological determinism. Like Himanen, Jordan is a sociologist and presents a positive account of hacking as a social and political project. The weakness of Jordan’s book is that it draws largely on literature written by hackers themselves and as such presents them as heroic “warriors” and “hacktivists”, in the same tradition as Levy and Himanen. What makes Jordan’s book particularly valuable is his argument that “hacking both refutes and demands technological determinism”. That is, hackers both promote the idea of technological determinism and provide a critique of that view.

To me, this suggests that hackers work both in and against a society that appears to be determined by technology but provide an example of how that often overwhelming feeling can be challenged and subverted. From this position, hacker culture can be seen as one of the most successful counter-culture movements in recent history, yet one which continues to struggle within a liberal, capitalist world view, dominated by money/value, property and the legal system.

In a similar way, E. Gabriella Coleman’s book, Coding Freedom: The Ethics and Aesthetics of Hacking (2012), is especially useful in identifying hackers and hacking as a liberal critique of liberalism. Coleman’s book is an anthropological study of hackers, in particular the free software hackers of the Debian GNU/Linux operating system, and points towards a methodological approach of examining hacker culture and other counter-cultures that are ‘in and against’ a dominant discourse. One particular instrument that hackers employ is Stallman’s ‘copyleft’ GPL license, which uses the existing law of copyright against itself. Similarly, Creative Commons and the Free Culture movement extend this approach beyond the software domain to all cultural artefacts. By examining hacker culture in this way, we can reveal its limits and the opportunities that the movement presents within liberal capitalist society.

Johan Soderberg’s (2008) Hacking Capitalism is a study of hacking as a political project. In the first chapter, Soderberg offers a ‘background of the hacker movement’ but only briefly mentions the ‘pre-history’ with which I am concerned. He rightly mentions the development of the telephone infrastructure, Norbert Wiener’s theory of cybernetics and its application in war-time funded research projects, which would eventually go on to develop the Internet. He also identifies the anti-war and appropriate technology movements as examples of how personal computing grew out of 1960s counter-culture (Turner and Markoff provide full accounts of this). However, much of Soderberg’s book is an examination of hacking using the categories of Marx’s critique of political economy (class, value, labour, commodities, etc.). In doing so, it is the only book-length study of hacking which attempts to methodologically examine hacking from the point of view of a critique of liberalism, rather than starting from a naturalised liberal understanding of categories such as property, work, production and exchange. For this reason, it is an important book (in need of a good editor!).

This very brief survey of five key books about hacker culture demonstrates that Rosenzweig’s remark about histories of the Internet can equally be applied to hacking. Taken together, they reveal that, in addition to the substantive body of biographical, social and institutional history, the history of hacking can be approached methodologically in two critically different ways. The first (embodied in Levy’s and Himanen’s books) offers a view of hacker culture from a liberal perspective: despite being mischievous, playful and meritocratic, its ethic is grounded in laissez-faire liberal ideals of property, markets and freedom. The conclusion to Jordan’s book offers a methodological bridge to the second, which Coleman develops more broadly and Soderberg more fully: a study of hacker culture can reveal to us an immanent critique of liberal capitalism. It is a culture that is both in and against; it is complicit but points to a way out through the development of intellectual and practical tools such as copyleft and the sharing and co-production of open source software. The development of this more critical approach to the study of hackers and hacking is overdue and should result in a much stronger defence of free software and hacker culture as it is increasingly incorporated and subsumed into neo-liberal policy and methods of valorisation.

My next post in this series will be about hackers and war.

Hacking and the commercialisation of scientific research

I began this series of blog posts (my notes for a journal article) by first outlining what might be considered the ‘flight of hackers’ from the university in the early 1980s, with the aim of then working backwards to establish the role of ‘the university’ (i.e. institutionalised academia) in the development of hacker culture. My second post began to focus on the role of MIT in particular, as a model of the ‘entrepreneurial university’ which other US universities copied, and on the generalisation of this model through the Bayh-Dole Act in 1980. Next, I had intended to move on to discuss the role of military funding which underwrote the AI Lab at MIT where Richard Stallman, “the last of the true hackers”, worked (Levy, 1984). However, I will leave that blog post for another day, as there is more to say on the commercialisation of scientific research up to 1980, which I would argue played a significant role in the birth of hacking in academia (often dated to 1961) and its agonising split when Stallman left his ‘Garden of Eden’ at MIT in 1984.

Until now, I have been drawing heavily on the work of Etzkowitz (2002), who has written about the rise of ‘entrepreneurial science’ at MIT and then Stanford. He draws upon the work of Mowery et al (2004), who provide an excellent account of the growth of patenting up to and in light of the Bayh-Dole Act. My interest is in their discussion of patenting prior to the 1980 Act, just four years before Stallman left MIT. As I wrote in my previous post, Stallman does not think that the Bayh-Dole Act had a direct impact on the “software war of 1982/83”, which makes sense in light of both Etzkowitz’s and Mowery’s accounts. By the time of the Bayh-Dole Act, MIT had been gradually internalising the commercialisation of its academic endeavours for decades, as had many other large research universities in the US, and Mowery concludes that the effect of the Act has been “exaggerated” and that “much of the post-1980 upsurge in university patenting and licensing, we believe, would have occurred without the Act and reflects broader developments in federal policy and academic research.”

In this post, I want to highlight those broader developments in order to provide a richer account of the development of hacker culture, which, although it took flight from the university in 1984, has very much returned in the last decade with the growth of the ‘openness’ agenda and the development of initiatives such as open education, OER, MOOCs and open data.

Of course, hackers never left the university entirely, but the early 1980s does seem to mark a point where hacker culture assumed an independence from academic culture, a division we might relate to the later tension between ‘free software’ and ‘open source’ hackers. This tension between ‘freedom’ and ‘openness’ has been described by Stallman as a conflict in emphasis between the “ideas of freedom, community, and principle” (free software) and “the potential to make high quality, powerful software” (open source). Although the free software hackers have never wholly shunned the support of business, it is clear that Stallman believes the primary focus should be a moral and ethical one and that an emphasis on business concerns “can be disastrous” to the ideals of the free software movement.

This value-based conflict over the relationship between hackers and business is also found among academics today, with some resisting the gradual move to an ‘entrepreneurial university’ model while others welcome it (see Etzkowitz 1998, 2000a, 2000b, 2001, 2002, 2003). In the US, the rise of ‘entrepreneurial science’ can be traced right back to the founding of the Land Grant universities, which I mentioned in my earlier post. Here, I want to focus specifically on the key instrument by which the commercialisation of science takes place: patents and their use in ‘technology transfer’ to industry. I should note that terms such as ‘entrepreneurial university’ and ‘technology transfer’ are not value-free, and by discussing their historical development we might subject the development of hacker culture to a critique similar to that which Slaughter and Leslie have applied to ‘Academic Capitalism’. In this post, I am developing the basis for that critique.

Patents as public good

Chapters 2-4 of Mowery’s book cover the history of patenting by US universities in great detail, pointing to the Morrill Act (1862) and the remit of Land Grant universities to serve their local regions by supporting agriculture and engineering (the ‘mechanical arts’). The book’s authors point to key “structural characteristics” of US higher education which laid the groundwork for the later commercialisation of scientific research. First, with the introduction of the land grants, US higher education has been notable for its scale and the autonomy of its institutions, with the responsibility of administering federal funds devolved to the respective state governments. However, this autonomy came with a keenly felt responsibility to the local region, and the founders and later presidents of Land Grant universities, like MIT, understood their obligation to meet the needs of their local communities. This is evident in the Land Grant universities’ “utilitarian orientation to science” (10) and their tendency to provide vocational education, combining training with research in methods to improve agriculture (12). Finally, US higher education was characterised by “the emergence of a unified national market for faculty at US research universities” (13). Compared to other national systems of higher education, the departmental structures of US universities and the corresponding division into disciplinary degree programmes meant that academics focused on their contribution to their discipline over and above their institution. This resulted in greater inter-institutional movement among academics and therefore a greater diffusion of ideas and research practices. Combined with the tendency towards applied science and vocational education, this also led to a “rapid dissemination of new research findings into industrial practice – the movement of graduates into industrial employment” (13). Mowery argues that these characteristics of US higher education

“created powerful incentives for university researchers and administrators to establish close relationships with industry. They also motivated university researchers to seek commercial applications for university developed inventions, regardless of the presence or absence of formal patent protection.” (13).

In effect, the discipline of engineering and the practice of applied science became institutionalised within US higher education, with MIT, founded in 1861, being one of the first universities to offer engineering courses. Having offered its first electrical engineering course in 1882, “schools like MIT had become the chief suppliers of electrical engineers” in the US by the 1890s (Mowery, 15, quoting Wildes and Lindgren), meeting a national need of the emerging electricity-based industries. I will address the growth of engineering as a discipline, and the political tensions within it, in a later post, as it seems to me that a counter-culture among engineers can be found in hackers today.

The moral dilemma that Stallman faced during the “software war of 1982/83” is familiar to many academics, and the “patent problem” has been the subject of much heated debate throughout the history of the modern university (see Mowery, ch. 3). In the US, although universities have worked in collaboration with industry since the founding of the Land Grant institutions, they remained sensitive about the handling of commercial contracts until the early 1970s, when the commercialisation of science was internalised in the structures and processes of university research administration. Debates often focused on the pros and cons of patenting inventions derived from research, with some academics believing that patents were necessary to protect the reputation of their institutions, for fear that an invention might be “wrongfully appropriated” by a “patent pirate” (Mowery, 36). Thus, the argument for patenting research inventions was based on the necessity of ‘quality control’, thereby preventing the “incompetent exploitation of academic research that might discredit the research results and the university” (37). This view saw patents as a way to “enhance the public good” and “advance social welfare” by protecting an invention from “pirates” who might otherwise patent it themselves and charge extortionate prices. Within the early pre-WWII debates around the use of patents by US universities, it was this moral argument of protecting a public good that led to patents being licensed widely and for low or no royalties. In fact, the few universities that began to apply for patents on their inventions did so through the Research Corporation, rather than directly themselves, so as to publicly demonstrate that their work was not corrupted by money.

The Research Corporation

The Research Corporation (see Mowery, ch. 4) was established in 1912 by Frederick Cottrell of the University of California, Berkeley. Cottrell had received six patents for his work on the electrostatic precipitator and felt strongly that his research would achieve more widespread utility if it were patented than if it were given to the public for free. His view was that research placed in the public domain was not exploited effectively: “what is everybody’s business is nobody’s business” (Mowery, quoting Cottrell, 59). However, Cottrell did not wish to involve university administrators in the management of the patents, as he believed that this would set a dangerous precedent by too closely involving non-academics in the scientific endeavours of researchers. He worried that it would place an expectation on academics to continue to produce work of commercial value, increasing the “possibility of growing commercialism and competition between institutions and an accompanying tendency for secrecy in scientific work” (Mowery, quoting Cottrell, 60).

Cottrell’s intentions appear to have been sincere. He was not interested in any significant personal accumulation of wealth from his patents and believed that the scientific endeavour and the public would benefit from the protection given by patents, provided an independent organisation managed them. Cottrell founded the Research Corporation to embody these beliefs and donated his patents to the Corporation in the form of an endowment, the income from which was to be managed and re-distributed as research grants. Cottrell regarded the formation of the Research Corporation as “a sort of laboratory of patent economics” and, from its inception, states Mowery, “he envisioned the Research Corporation as an entity that would develop and disseminate techniques for managing intellectual property of research universities and similar organisations” (60).

During its 70-year history, this “laboratory of patent economics” found it difficult to sustain its activity, despite a number of changes in approach. In its early pre-WWII period, it was an incubator for commercial applications of Cottrell’s patents, employing 45 engineers within the first five years, who not only designed applications for the use of precipitators but installed them for clients, too. When Cottrell’s endowment to the Corporation began to run out, the organisation looked to researchers in other technology fields to donate their inventions. In effect, it seems the Corporation began to acquire patents so that it could afford to keep managing existing patents with dwindling returns and continue its philanthropic mission. The Research Corporation attracted a number of donations of patents from researchers with similar philanthropic agendas, as Mowery notes:

The expanding research collaboration between US universities and industry and the related growth of science-based industry increased the volume of commercially valuable academic research in the 1920s and 1930s, resulting in more and more requests from academic inventors to the Research Corporation for assistance in the management of patenting and licensing. (62)

So, for the first couple of decades of the Corporation’s existence, much of the income which sustained the organisation came from its work relating to Cottrell’s original precipitator inventions. As these revenues decreased, the Research Corporation looked for other sources of income. This coincided with the Great Depression, a time when universities, MIT among them, were struggling to remain solvent. Rather than allow MIT to merge with Harvard, its President, Karl Compton, charged Vannevar Bush, then Dean of MIT’s School of Engineering, with developing a patent policy for the university. With this, MIT asserted an institutional claim on any invention resulting from research funded by the university. However, the patent committee recommended that MIT should be relieved “of all responsibility in connection with the exploitation of inventions while providing for a reasonable proportionate return to the Institute in all cases in which profit shall ensue.” (Mowery, 64) To undertake this, MIT drew up an ‘Invention Administration Agreement’ (IAA) with the Research Corporation, which not only created a precedent for other universities, but also marked a clear shift from the individual ownership of research inventions, many of which had been donated to the Corporation by philanthropic academics, to institutional ownership, which anticipated an income from that research (a 60/40 split between MIT and the Corporation). As a result, Cottrell’s original vision of an independent charitable organisation that turned patent income into grants for further scientific work had to meet the challenges of the Depression and the unpredictable nature of successfully exploiting research.

MIT institutionalised its relationship with the Research Corporation, using it to exclusively manage its patents from 1937 to 1946, and eventually cancelled its contract with the Corporation in 1963, by which time concerns about directly managing the commercial exploitation of its research had largely disappeared and the in-house skills to undertake the necessary administration had been developed over the course of the relationship. The partnership between MIT and the Research Corporation was never very profitable, with the Corporation making net losses during the decade that it exclusively managed MIT’s patents. However, during and following WWII, the scale of research activity in US universities markedly increased. Mowery notes that

the expansion of military and biomedical research conducted in US universities during and after the war had increased the pool of potentially patentable academic inventions, and federal funding agencies compelled universities to develop formal patent policies during the early post-war period. The Research Corporation negotiated IAAs, modelled on the MIT agreement, with several hundred other US universities during the 1940s and 1950s. (66)

The history of the Research Corporation, as told by Mowery et al, is a fascinating one, pointing to the difficulties in successfully commercialising research through the licensing of patents. Between 1945 and 1980, the top five patents held by the Corporation accounted for the majority of its income and “although its portfolio of institutional agreements, patents, and licenses grew during the 1950s, growth in net revenues proved elusive.” (69)

The latter years of the Research Corporation were spent trying to build relationships with university staff in an effort to develop the necessary skills to identify potentially commercial inventions across different research disciplines. Ironically, in its attempt to off-load some of the administrative costs to institutions, the Corporation effectively trained university administrators to manage without its assistance, eroding the competitive advantage that the Corporation previously held. During the 1970s, universities were also ‘cherry picking’ inventions to patent themselves, rather than passing them to the Research Corporation, in an effort to benefit from all of the potential revenue rather than a cut of it. “The Research Corporation’s 1975 Annual Report noted that many universities were beginning to withhold valuable inventions.” (Mowery, 77) This can be seen as a clear indication that the earlier concerns about universities directly exploiting their research had been largely overcome, and that during the 1960s and 1970s, the institutional structures and skills within the larger research universities like MIT had been put in place, partly with the assistance of the Research Corporation.

Conclusion

The institutionalised commercialisation of research at MIT began in the 1930s, when MIT developed one of the first university patent policies, clearly indicating that the institution had a claim to the profits deriving from its research activity. Richard Stallman joined the DARPA-funded AI Lab at MIT as a Research Assistant in 1971, eight years after MIT had cancelled its Agreement with the Research Corporation and fully internalised the process of identifying and managing patents. In this respect, MIT was at the forefront of a movement among US universities to systematically commercialise their research – to engage in ‘entrepreneurial science’ – where research groups are run as de facto firms (Etzkowitz 2003). The military-funded work in Artificial Intelligence during the 1970s, which Stallman contributed to, can be understood within the context of the academy’s role in supporting the Cold War effort (Leslie, 1994; Chomsky et al, 1997; Simpson, 1998). This programme of funded research across a number of disciplines consequently increased the number of commercial opportunities (‘technology transfers’), not least in the fields of electronics, engineering and the emerging discipline of computer science. Indeed, Symbolics, the company which was spun off from the AI Lab in the early 1980s, attracting most of Stallman’s fellow hackers, produced Lisp Machines for the Cold War military market in Artificial Intelligence, eventually going bust when the Cold War ended.

My point in discussing the rise in the use of patents to exploit government-funded research in US universities during the twentieth century is to show how the split that took place in the AI Lab in the early 1980s, which devastated Stallman and compelled him to leave, was the result of a long process by which US universities, led by MIT, internalised the idea, skills and processes of making money from research. Just as the development of Land Grant universities and the practice of applied science, patronised by vast sums of government funding, gave birth to hacker culture in the early 1960s, so that culture remained tied to the structural changes taking place within US higher education during the 1960s and 1970s and the shift towards entrepreneurialism. Stallman’s ‘Garden of Eden’ was, I think, always going to be a short-lived experience: he joined MIT at the beginning of a decade in which government funding from the three defence, space and energy agencies was in decline, from a peak of 80% of all federal funding in 1954 to 30% in 1970. As funding in these areas declined, while the licensing of patents and the overall share of research funding coming from industry were on the rise (see Mowery et al, 23-27), it seems inevitable that the institution which had given birth to hacking in the early 1960s would try to valorise the work of these researchers as effectively as it could. Stallman has said that he and his colleagues did not object to the commercialisation of their work, but the instruments of this advancing entrepreneurialism (patents, copyright, licenses) were at odds with at least one of the long-established “institutional imperatives” of scientific practice: “Communism” (Merton, 1973).

In a sincere yet novel way, Frederick Cottrell recognised this in 1912, when he established the Research Corporation as a charity and donated his patents so as to benefit public social welfare and provide philanthropic grants for further scientific work. However, twenty years later, in the midst of the Depression, MIT asserted an institutional interest in the ‘intellectual property’ of its researchers and sought a majority cut of the income deriving from its patents. It took a further three decades or so for MIT to relinquish the use of the Research Corporation altogether and fully institutionalise the commercial exploitation of scientific research. Merton’s “communism” as a foundation of the scientific ethos, articulated in 1973, seems both an ironic use of the term, given that most scientific research in the US was being funded through the Cold War agencies, and removed from the reality of what was happening within institutions as they advanced ‘entrepreneurial science’. Merton understood this, and his description of the “communal character of science” (Merton, 274) surely refers more to a liberal ideal than actual practice, just as Stallman’s characterisation of ‘freedom’ draws heavily on liberal political philosophy but is continuously confronted with the reality of capitalist valorisation. A blog post for another day…

Critical Open Education Studies

I don’t know what to make of David Wiley’s latest blog post ‘MOOCs and Digital Diploma Mills: Forgetting Our History‘. I am astonished, to put it politely, that one of the leading thinkers in the Open Education movement is still sitting on the fence, despite having written about the proletarianisation of teaching in his early work. Of course David Noble’s critique of distance education (later expanded in a book) is applicable to open education. Noble’s concern about “the systematic conversion of intellectual activity into intellectual capital and, hence, intellectual property” is not remedied with the simple application of a Creative Commons license – if only it were that easy. Many academics are already free to choose how to distribute their scholarly work (a CC license simply replaces the traditional transferral of copyright to the journal publisher, increasing the impact of the institution’s outputs), but what Noble was concerned with was the systematic interference by institutions in the re-production of teaching and learning, which is what xMOOCs are undertaking. xMOOCs are capturing value in teaching and learning where it was previously shared at the whim and will of the individual teacher. In choosing to license content under a CC license, such institutions are converting an under-commons into a valorised commons.

David Noble died in 2010 and did not revise his views about distance education in light of the open education movement. I suggest that this is because his argument remains apposite for OER-based teaching and learning, too. The content may be ‘free’, but the teacher is drawn further into the valorisation process of the institution.

As can be seen from David Wiley’s significant number of articles relating to open education, the movement has had over a decade to reflect critically on itself, yet there remains a void of reflexive, critical work that attempts to develop the open education movement and protect it from threats such as those which Wiley draws out of Noble’s article. There is no doubt that the work of David Wiley and others to advocate open education and grow the movement is a sincere and important contribution to a notion of the ‘public good’, but the movement still remains largely inward looking and self-referential. It is dominated by learning developers and technologists who are necessarily focused on implementation and have little time, motivation or opportunity for critique.

Where are the scholarly papers that examine open education from the range of disciplines within the social sciences? What has open education demonstrably learned from the tradition of popular and critical pedagogy? How have the critiques of open source been applied to OER? Similarly, the movement has much to learn from critiques of P2P, but where is this critical, scholarly dialogue occurring?

In the UK, the OER movement has been tightly coupled to state (JISC/HEA) funding, which has now ended. I was the recipient of two grants in this programme of funding. The synthesis evaluation of #ukoer clearly presents the instrumental agenda of the programme. Related conferences are mostly one project after another attempting to demonstrate their ‘progress’, with robust critique almost entirely absent. My experience at Open Education 2010 was the same. Academics seeking funding are naturally keen to satisfy the expectations of their funders, and the effect that these funding programmes have had on the fundamental direction of open education in the UK has yet to be critically examined. What would open education look like if we hadn’t taken the money? Similarly, in the US, state funding and, to a greater extent, philanthropic funding has steered how the movement has developed. Funding is provided based on the premise of open education’s public good and we feel obliged to demonstrate this. There is a history of state funding influencing the outputs of academia; what effect has it had on us?

Clearly, if this work is being undertaken, I have not found it, and so I am hoping that others will join me in reviewing the existing critical literature so that we might identify what has been written and therefore what needs to be done. I have made three contributions in the last two years. The first paper, with Richard Hall, addressed the potential role of open education in sustaining higher learning. The second was a critique of the valorisation of institutional OER. The third, a paper with Mike Neary, critiqued the idea of ‘commons’. While I am trying to develop a critique of open education within the framework of a critique of political economy, I know that approaches from other disciplines within the social sciences will prove insightful and fruitful, too. My next paper will be a critical examination of the historical role of academia in the development of hacker culture (and therefore of notions of ‘freedom’ and ‘openness’ that have returned to the university via the success of Creative Commons). I think much remains to be done to uncover the historical forces, structures and conditions that gave rise to open education. Without this, how can we understand ourselves?

I have looked for literature reviews of open education and found little that satisfies my requirements for texts that are critiques of open education. For example, Mendeley groups point to the usual plethora of blog posts, news articles, reports and project outcomes. A Google Scholar search is not encouraging either. In the apparent absence of ‘critical open education studies’, I hope that you will recommend papers that offer David and me robust critiques of open education, OER and related practices. I think that the development of this area of scholarship would demonstrate the maturity of the movement and protect it from manipulation, co-option and coercion in the future.