Hackers, War and Venture Capital

In my previous post in this series, I discussed the role of military funding in the formation of a ‘genealogy’ of university laboratories, their projects and their staff, which produced the conditions for hacking during the 1960s and 70s. As I drafted that post, I found myself drifting into a discussion of the role of venture capital, but I have split that discussion into this final post so as to highlight another important aspect of the role of the university in the development of hacker culture.

Levy (1984) points to the arrival in 1959 of the TX-0 computer as a seminal moment in the history of hacking. The computer had been donated by the Lincoln Laboratory to MIT’s Research Laboratory of Electronics (RLE), the direct successor of the Rad Lab and today, “MIT’s leading entrepreneurial interdisciplinary research organization.” Similarly, Eric Raymond points to the arrival at the RLE of the PDP-1 computer in 1961 as the moment that defined the beginning of ‘hackerdom’. Notably, at that time the RLE shared the same building as the Tech Model Railroad Club (TMRC), the legendary home of the first hackers. The history of hacking is understandably tied to the introduction of machines like the TX-0 and PDP-1, just as Richard Stallman refers to the demise of the PDP-10 as “the last nail in the coffin” for 15 years of work at MIT. Given the significance of these machines, a history of hacking should include a history of the key technologies which excited and enabled those students and researchers to hack at MIT in the early 1960s. To some extent, Levy’s book achieves this. However, in undertaking a history of machines, we necessarily undertake a social history of technology and of the institutions and conditions which reproduced its development, and in doing so we reveal the social relations of the university, the state and industry (Noble, 1977, 1984).

The birth of Digital Equipment Corporation

In 1947, the US Navy funded MIT’s Servomechanisms Lab to run Project Whirlwind, developing a computer that tracked live radar data. The Whirlwind project was led by Jay Forrester, a leading systems theorist and the principal inventor of magnetic core memory (the patenting of which was marked by a dispute between MIT and the Research Corporation, resulting in the cancellation of MIT’s contract with the Corporation).

MIT’s Lincoln Lab was set up in 1951 to develop the SAGE air defence system for the US Air Force, which expanded on the earlier research of Project Whirlwind. The TMRC hackers’ first computer was a TX-0 from the Lincoln Lab, its cathode-ray display borrowed from the SAGE project’s research into radar. Though large by today’s standards, the TX-0 was smaller than Whirlwind and was one of the first transistorised computers, designed and built at MIT’s Lincoln Lab in 1956-7 (Ceruzzi, 2003, 127). Much of the innovation found in the TX-0 was soon copied in the design of the PDP-1, developed in 1959 by the Digital Equipment Corporation (DEC).

DEC was founded by Ken Olsen and Harlan Anderson, two engineers from the Lincoln Lab who had also worked on the earlier Whirlwind computer. Watching students at MIT, Olsen had noticed the appeal of the interactive, real-time nature of the TX-0 compared to the more powerful but batch-operated computers available, and saw a commercial opportunity for a machine like the TX-0. Soon after they established their firm, they employed Ben Gurley, who had worked with them at the Lincoln Lab and had designed the interactive display of the TX-0, which used a cathode-ray tube and light pen. It was Gurley who was largely responsible for the design of the PDP-1. DEC is notable for many technical and organisational innovations, not least that it permitted and encouraged its clients to modify their computers, unlike its competitor IBM, which still operated on a locked-down leasing model. DEC’s approach was to encourage the use of its machines for innovation, providing “tutorial information on how to hook them up to each other and to external industrial or laboratory equipment.” (Ceruzzi, 2003, 129) This appealed not only to the original TMRC hackers but to many of its customers, too, and led to DEC becoming one of the most successful companies funded by the venture capital firm American Research and Development Corporation (ARD).

The birth of venture capitalism in the university

ARD, established in 1947, is regarded as the first venture capital firm and was “formed out of a coalition between two academic institutions.” (Etzkowitz, 2002, 90) It was founded by the “father of venture capital”, Georges Doriot, then a professor at Harvard Business School; Ralph Flanders, an engineer and president of the Federal Reserve Bank of Boston; and Karl Compton, President of MIT. ARD employed administrators, teachers and graduate students from both MIT and Harvard. The motivation for setting up this new type of company was its founders’ belief that America’s future economic growth rested on the country’s ability to generate new ideas which could be developed into manufactured goods and so generate employment and prosperity. This echoed the argument put forward by Vannevar Bush that, following the war, “basic research” should be the basis for the country’s economic growth. Both views confirm the idea/ideology that innovation follows a linear process: basic research is applied, developed and later taken into production. However, while government was funding large amounts of R&D in universities, the founders of ARD complained of a lack of capital (or rather, of a model for issuing capital) that could continue this linear process of transferring science to society.

ARD funded DEC after Olsen and Anderson were recommended by Jay Forrester. This led to an investment of $100,000 in equity, with a further $200,000 available in loans; within just a few years DEC was worth $400m. This allowed ARD to take greater risks with its investments: “The huge value of the Digital Equipment stock in ARD’s portfolio meant that the relatively modest profits and losses on most new ventures would have virtually no effect on the venture capital firm’s worth.” (Etzkowitz, 2002, 98) ARD’s success marked the beginning of a venture capital industry that has its origins in the post-war university and a mission to see federally funded research exploited in the ‘endless frontier’ of scientific progress. It led to a model that many other universities copied: providing ‘seed’ capital investment to technology firms and establishing ‘startup’ funds within universities. Most recently, we can observe a variation of this model in the ‘angel investment’ firm Y Combinator, which has specifically sought to fund recent graduates and undergraduate students during their summer breaks.

Y Combinator and the valorisation of student hackers

A proper analysis of Y Combinator in the context of the history of hacking, the university and venture capital is something I hope to pursue at a later date. In this current series of posts discussing the role of the university in the ‘pre-history’ of hacker culture, I want to flag up that Y Combinator can be understood within the context of the university’s role in the venture capital industry. Just as academic staff have been encouraged to commercialise their research through consultancy, patents and seed capital, in its early stages Y Combinator sought to valorise the work of students by offering its ‘Summer Founders Program’. Similarly, its founder, Paul Graham, has often addressed students in his writing and discussed the role of the university experience in bootstrapping a successful start-up company. Graham’s on-going articles provide a fascinating and revealing body of work for understanding the contemporary relationship between students, the university, hacking and venture capital. In this way, Y Combinator represents a lineage of hacking and venture capital that grew out of the university but never truly left it. Despite recent claims that we are witnessing the demise of higher education as we know it, the university as a knowledge factory remains a fertile source of value through the investment of public money and the production of immaterial labour, something that Vannevar Bush would be proud of.

Series conclusion

This is the last of a series of six posts on the role of the university in the development of hacker culture. These posts are my notes for a journal article I hope to have published soon, which will argue, as I have done here, that the pre-history of hacking (pre-1960) is poorly documented and that much of it can be found in an examination of the history of American higher education, especially at MIT.

As an academic who works in a ‘Centre for Educational Research and Development’, and who runs various technology projects and works with young developers, I am interested in understanding this work in the context of the trend over the last decade or so towards ‘openness’ in higher education. Ideas and practices such as ‘open education’, ‘open access’, ‘open educational resources’ (OER) and, most recently, ‘Massive Open Online Courses’ (MOOCs) and ‘open data’ are already having a real impact on the form of higher education and its institutions and will continue to do so. My work is part of that trajectory, and I recognise that the history of openness in higher education goes back further than the documented last 10-15 years. It is well known that the early efforts around OER, OpenCourseWare and the concurrent development of Creative Commons licenses owe a great deal to the ‘open source’ licensing model developed by early hackers such as Richard Stallman. I hope that in these posts I have shown that, in turn, the free and open source software movement(s) was, in its early formation, a product of the political, economic and ultimately institutional conditions of the university. Richard Stallman felt compelled to leave the academy in 1984 as he found that “communism”, a foundational ethos of science as famously described by Merton (1973), was by that time little more than an ideal, one that had barely existed at MIT since the Great Depression.

This points towards a history of openness in higher education that is rooted in hacker culture and therefore in the commercialisation of scientific research, military funding regimes and the academy’s efforts to promote a positive ideology of science to the public. Stallman’s genius was the development of ‘copyleft’, in the form of the GPL, which was very influential in the later development of the Creative Commons licenses used (and partially developed) in higher education. Through the growth of the free and open source software movements over the last 25 years, the academy has been reminded (and, as a participant, has reminded itself) that the ideal of communism in science forms the basis of a contract with society that can still be achieved through the promotion of openness in all its forms. However, in hindsight, we should be cautious and critical of efforts to yet again valorise this new agenda in science through calls to adopt permissive licenses (e.g. CC-BY, MIT, ODC-By) rather than Stallman’s weapon of scientific communism: copyleft.

Hacking, war and the university

In each of the posts in this series about the role of the university in the development of hacker culture, I have indicated that central to a history of hacking should be a greater understanding of the role of military research funding. The role of federal funding from government agencies such as the Dept. of Defense looms so large in the history of hacking that I assumed it would be one of the first posts I wrote, but I found that in order to understand funding of this type, I had to explore the history of US higher education, in particular the purpose of the Morrill Act and how it led to the development of universities whose remit was initially ‘applied’ scientific research and vocational training, in contrast to the teaching universities of the mid-nineteenth century, such as Harvard and Columbia. The Land Grant universities’ focus on applied science, and their mandated responsibility for the development of their local regions, led to research activity that became increasingly entrepreneurial over the decades, culminating in the Bayh-Dole Act of 1980, developed during a period of economic decline. Similarly, it was the economic conditions of the 1920s that led to the development of a model for handling industrial contracts at MIT which was later used for handling federal funding across several universities during WWII (Etzkowitz, 2002; Lowen, 1997).

The defence-funded ‘AI Lab’ where Richard Stallman worked between 1971 and 1984 must be situated within a complex association of projects, people and funding arrangements at MIT that stretches back to the turn of the twentieth century. The fact that hacker culture at MIT during the 1960s and 70s was wholly reliant on military funding has been acknowledged but not studied in the existing literature on hacking; the extent to which it was a product of university-military-industrial relations is an area for further study.

Before World War II

Federal funding to US universities was not a significant source of research income until the Second World War. Lowen (1997) and Etzkowitz (2002) point to the experience of the First World War and then the Great Depression as stimuli for the closer relationship between universities and federal government. MIT President Karl Compton and Vannevar Bush, at that time Dean of MIT’s School of Engineering and a Vice-President of MIT, were among a group of academics who “were dissatisfied with the military’s use of academic science during World War I”. (Etzkowitz, 2002, 42) This dissatisfaction should be understood in the context of an eventual shift in science policy leadership from agriculturalists to physicists during the inter-war years (Pielke Jr, 2012). Compton and Bush sought to establish an agency under the control of academics that would liaise with the military and transfer their innovations to a future war effort. Around this time, MIT lost the state funding that had originated with its land grant and entered a financial crisis which almost led to it becoming part of Harvard’s engineering school. To avoid this embarrassment, MIT’s leaders made a conscious effort to develop relations with industry, and by the 1930s the Institute had developed policies for patenting and consulting practices, as well as appealing to its alumni networks.

In 1919, MIT implemented a ‘Technology Plan’ in an effort to raise the $8m required to save the Institute. As a beneficiary of many MIT graduates, George Eastman (of Eastman Kodak) provided half of this sum. Yet despite this support, the Technology Plan was only a partial success, with interest from other companies dwindling after the initial contracts expired – after all, MIT was now charging for research services it had once provided to industry for free. By 1939, Etzkowitz notes, “it was accepted that the Technology Plan was a failure.” (45) However, the legacy of the plan was much greater: it established an office that negotiated research contracts with industry, and this office was then used as a model for how government transferred funds to MIT and a few other universities during World War II.

War-time government funding

By the time World War II began, leading academics such as Vannevar Bush, by then President of the Carnegie Institution of Washington, had successfully lobbied government to create a federal agency to co-ordinate military research. In contrast to the relatively low position accorded to academic scientists during the First World War, Bush and others sought to place academics at the heart of government policy-making through the establishment of the National Defense Research Committee (NDRC) (1940-1). The composition of this ground-breaking committee was revealing: of the eight original members, four were academics, two were from the military, one was from business and another was the US Commissioner of Patents, underlining the strategic relationship between government, industry and the academy (see LoC records). The most significant achievement of the NDRC’s short history was the formation of the MIT Radiation Lab (‘Rad Lab’), which developed radar technology during the war. The Rad Lab (1940-45) was shut down at the end of the war but became the model for future ‘labs’ at MIT and elsewhere, such that there is a ‘genealogy’ of labs (such as the AI Lab), projects (e.g. ‘Project MAC’) and people (like Richard Stallman) that can be traced back to the Rad Lab and the NDRC.

In 1941, the NDRC was superseded by the Office of Scientific Research and Development (OSRD) (1941-7), led by Vannevar Bush. The OSRD was a fully-fledged funding agency for distributing public money to support research at universities. Five universities became the main beneficiaries of this funding during the war: MIT, Johns Hopkins, Berkeley, Chicago and Columbia, and the OSRD co-ordinated a mass migration of scientists from universities across the country to work at one of these select centres of research.

The increase in research funding during WWII was huge. Mowery et al (2004) show that federal R&D funding went from $784.9m to $12.4bn over the 1940-45 period, more than a fifteen-fold increase (all figures from Mowery et al are in 1996 dollars). MIT was the largest single recipient ($886m), receiving almost seven times more than Western Electric, the largest commercial recipient ($130m) (Mowery, 2004, 22). Consequently, the contractual arrangements developed at MIT prior to and during WWII, and the level of funding administered on behalf of the federal government, fundamentally changed the relationship between government and universities. The success of this arrangement led President Roosevelt to request that Vannevar Bush draft the famous policy report, Science: The Endless Frontier (1945), in which he argued that “basic research” was the basis for economic growth, an assumption that remains common, though questionable, today (Pielke Jr, 2012).

Post-war funding

Despite a brief dip in funding immediately after the war, when the OSRD was dissolved and discussions took place over the formation of a new peace-time agency, by 1965 federal funding accounted for 73% of all academic R&D funding to US universities, compared to just 24% in 1935. Post-war funding was dominated by two fields, defence and health, with military-related funding split between the Dept. of Defense, NASA and the Dept. of Energy. During the 1960s and 70s “golden age” of hacking at MIT, the overall level of federal funding to universities declined from 73% of all university R&D funding in 1965 to 63% in 1985, by which time a greater percentage of income was being derived from industry, assisted by the Bayh-Dole Act. The Second World War solved MIT’s inter-war financial crisis, as Forman has noted:

MIT, on the inside track, emerged from the war with a staff twice as large as it had had before the war, a budget (in current dollars) four times as large, and a research budget ten times as large – 85% from the military services and their nuclear weaponeer, the AEC.

An examination of the funding arrangements for academic R&D from the post-WWI period to the Bayh-Dole Act in 1980 reveals dramatic change, not only in the amount of public money being transferred to universities, but also in the way that academic scientists developed much closer relationships with government and re-conceptualised the idea, practice and purpose of science. A new ideology of science was formed, encapsulated by its chief architect, Vannevar Bush, in Science: The Endless Frontier, which redefined the “social contract” between scientists and government and argued for the importance of funding for “basic research”. Throughout these developments, dramatic changes were also taking place in the institutional forms of universities and in the movement of academic labour from institution to institution and from research project to research project. So-called ‘labs’, like MIT’s Lincoln Lab, were large semi-autonomous organisations in themselves, employing thousands of researchers and assistants. They became the model for later ‘science parks’ and spawned projects and research groups which then became independent ‘labs’ with staff of their own, such as the AI Lab. Stanford University learned from this model, and it arguably led to the creation of Silicon Valley (Etzkowitz, 2002; Gillmor, 2004).

The AI Lab, where Richard Stallman worked from 1971 to 1984, is legendary in the history of hacking (Levy, 1984). Like many MIT labs, its origins can be traced back to the Rad Lab through the Lincoln Lab and the Research Laboratory of Electronics (RLE), where some of its personnel formerly worked and developed their thinking around Artificial Intelligence. The AI Lab began as a research group within Project MAC (Multiple Access Computer and Machine-Aided Cognition). Project MAC was set up in 1963 and originally led by Robert Fano, who had worked in the Rad Lab. J.C.R. Licklider, who helped establish the Lincoln Lab and worked at RLE, succeeded Fano as Director of Project MAC in 1968; Licklider had worked from 1962 for ARPA (later DARPA), an agency of the Dept. of Defense, where he was responsible for the original Project MAC grant. He remained Director of Project MAC until 1971. A year earlier, in 1970, Marvin Minsky, who led Project MAC’s AI research group, had split the group off to form the AI Lab, shortly before Stallman arrived as a Research Assistant. In this pre-history of hacker culture, little more needs to be said about the AI Lab as it is well documented in Levy’s book, but what I wish to underline is the extent to which the AI Lab, Stallman’s ‘Garden of Eden’, was the strategic outcome of institutional, government and commercial relationships stretching back to the NDRC and the Rad Lab.

A “triple helix” or an “iron triangle”?

To sketch the intertwining history of such labs and projects at MIT alone is not straightforward, and a preliminary effort to do so shows, as one might expect, a great deal of institutional dynamism over the years. As economic conditions and government funding priorities shifted, institutions responded by re-aligning their focus, all the while lobbying government and coaxing industry. Etzkowitz refers to this as the ‘triple helix’ of university-industry-government relations and evidence of a “second academic revolution”. Others have been more critical, referring to the “military-industrial-academic complex” – apparently Eisenhower’s original phrase (Giroux, 2007) – and “the ‘iron triangle’ of self-perpetuating academic, industrial and military collaboration” (Edwards, 1997, referring to Adams, 1982). From whichever perspective, there is no doubt that these changes gradually took place, spurred on at times by WWII and the Cold War. US universities (and later other national systems of HE) initially incorporated research as a social function of higher education (revolution #1) and then moved to “making findings from an academic laboratory into a marketable product” (revolution #2) (e.g. Etzkowitz, 1997, 2001). Today, every university, such as my own, has an ‘enterprise strategy’, ‘income generation’ targets and various other instruments, which can be traced back to the model that MIT established in the 1920s.

Although the accounts of Etzkowitz and Mowery et al are compelling, they provide only cursory mention of the struggle that has taken place over the years as the university has increased its ties with the military and industry. In particular, such accounts rarely dwell on the opposition and concern within academia over the receipt of large sums of defence funding, and the ways in which academics circumvented and subverted their complicit role in this culture. A number of books have been written which do critically examine this ‘second revolution’ or the “iron triangle” (e.g. Edwards, 1997; Leslie, 1993; Heims, 1991; Chomsky et al, 1997; Giroux, 2007; Simpson et al, 1998; Noble, 1977; Turner, 2006; Mindell, 2002; Wisnioski, 2012).

As these critics’ accounts have shown, there has always been a great deal of unease, and at times dissent, among students and staff at MIT and the other universities which were recipients of large amounts of military funding. Although I do not wish to generalise the MIT hackers of the 1960s and 70s as overtly political, they were clearly acting against the constraints of an intensifying managerialism within institutions across the US, and in particular against the rationalisation of institutional life pioneered by the engineering profession and its ties with corporate America (Noble, 1977). Hackers’ attraction to time-sharing systems, the ability to personalise computing, programmatic access to the underlying components of computers and the use of computers for leisure activities is characteristic of a sub-culture within the university (Levy, 1984; Wisnioski, 2012) and, to some extent, of the developing counter-culture of that period (Turner, 2006). Such accounts, I think, are vitally important to understanding the development of hacker culture, as are the more moderate accounts of federal funding and the development of the entrepreneurial university.

My final post in this series highlights the relationship between venture capital, the university and hacking.

Hacking and the commercialisation of scientific research

I began this series of blog posts (my notes for a journal article) by first outlining what might be considered the ‘flight of hackers’ from the university in the early 1980s, with the aim of then working backwards to establish the role of ‘the university’ (i.e. academia) in the development of hacker culture. My second post began to focus on the role of MIT in particular, as a model of the ‘entrepreneurial university’ which other US universities copied, and on the generalisation of this model through the Bayh-Dole Act of 1980. Next, I had intended to move on to discuss the role of military funding, which underwrote the AI Lab at MIT where Richard Stallman, “the last of the true hackers”, worked (Levy, 1984). However, I will leave that blog post for another day as there is more to say on the commercialisation of scientific research up to 1980, which I would argue played a significant role in the birth of hacking in academia (often dated to 1961) and in its agonising split when Stallman left his ‘Garden of Eden’ at MIT in 1984.

Until now, I have been drawing heavily on the work of Etzkowitz (2002), who has written about the rise of ‘entrepreneurial science’ at MIT and then Stanford. He draws upon the work of Mowery et al (2004), who provide an excellent account of the growth of patenting up to and in light of the Bayh-Dole Act. My interest is in their discussion of patenting prior to the 1980 Act, just four years before Stallman left MIT. As I wrote in my previous post, Stallman does not think that the Bayh-Dole Act had a direct impact on the “software war of 1982/83”, which makes sense in light of both Etzkowitz’s and Mowery’s accounts. By the time of the Bayh-Dole Act, MIT had been gradually internalising the commercialisation of its academic endeavours for decades, as had many other large research universities in the US, and Mowery concludes that the effect of the Act has been “exaggerated” and that “much of the post-1980 upsurge in university patenting and licensing, we believe, would have occurred without the Act and reflects broader developments in federal policy and academic research.”

In this post, I want to highlight those broader developments in order to provide a richer account of the development of hacker culture, which, although it took flight from the university in 1984, has very much returned in the last decade with the growth of the ‘openness’ agenda and the development of initiatives such as open education, OER, MOOCs and open data.

Of course, hackers never left the university entirely, but the early 1980s does seem to mark a point where hacker culture assumed an independence from academic culture, a division we might relate to the later tension between ‘free software’ and ‘open source’ hackers. This tension between ‘freedom’ and ‘openness’ has been described by Stallman as a conflict in emphasis between the “ideas of freedom, community, and principle” (free software) and “the potential to make high quality, powerful software” (open source). Although the free software hackers have never wholly shunned the support of business, it is clear that Stallman believes the primary focus should be a moral and ethical one and that an emphasis on business concerns “can be disastrous” to the ideals of the free software movement.

This value-based conflict over the relationship between hackers and business is also found among academics today, with some resisting the gradual move to an ‘entrepreneurial university’ model while others welcome it (see Etzkowitz 1998, 2000a, 2000b, 2001, 2002, 2003). In the US, the rise of ‘entrepreneurial science’ can be traced right back to the founding of the Land Grant universities, which I mentioned in my earlier post. Here, I want to focus specifically on the key instrument by which the commercialisation of science takes place: patents and their use in ‘technology transfer’ to industry. I should note that terms such as ‘entrepreneurial university’ and ‘technology transfer’ are not value-free, and through discussing their historical development we might subject the development of hacker culture to a critique similar to the one Slaughter and Leslie applied in ‘Academic Capitalism’. In this post, I am developing the basis for that critique.

Patents as public good

Chapters 2-4 of Mowery’s book cover the history of patenting by US universities in great detail, pointing to the Morrill Act (1862) and the remit of Land Grant universities to serve their local regions by supporting agriculture and engineering (the ‘mechanical arts’). The book’s authors point to key “structural characteristics” of US higher education which laid the groundwork for the later commercialisation of scientific research. First, with the introduction of the land grants, US higher education has been notable for its scale and the autonomy of its institutions, with the responsibility for administering federal funds devolved to the respective state governments. However, this autonomy came with a keenly felt responsibility to the local region, and the founders and later presidents of Land Grant universities like MIT understood their obligation to meet the needs of their local communities. This is evident in the Land Grant universities’ “utilitarian orientation to science” (10) and their tendency to provide vocational education, combining training with research in methods to improve agriculture (12). Finally, US higher education was characterised by “the emergence of a unified national market for faculty at US research universities.” (13) Compared to other national systems of higher education, the departmental structures of US universities and the corresponding division into disciplinary degree programmes meant that academics focused on their contribution to their discipline over and above their institution. This resulted in greater inter-institutional movement among academics and therefore a greater diffusion of ideas and research practices. Combined with the tendency towards applied science and vocational education, this also led to a “rapid dissemination of new research findings into industrial practice – the movement of graduates into industrial employment.” (13) Mowery argues that these characteristics of US higher education

“created powerful incentives for university researchers and administrators to establish close relationships with industry. They also motivated university researchers to seek commercial applications for university developed inventions, regardless of the presence or absence of formal patent protection.” (13).

In effect, the discipline of engineering and the practice of applied science became institutionalised within US higher education, with MIT, founded in 1861, being one of the first universities to offer engineering courses. MIT offered its first electrical engineering course in 1882, and by the 1890s “schools like MIT had become the chief suppliers of electrical engineers” in the US (Mowery, 15, quoting Wildes and Lindgren), meeting a national need created by the emerging electricity-based industries. I will address the growth of engineering as a discipline, and the political tension within it, in a later post, as it seems to me that a counter-culture among engineers can be found in hackers today.

The moral dilemma that Stallman faced during the “software wars of 1982/83” is familiar to many academics, and the “patent problem” has been the subject of much heated debate throughout the history of the modern university (see Mowery, ch. 3). In the US, although universities have worked in collaboration with industry since the founding of the Land Grant institutions, they remained sensitive to the handling of commercial contracts until the early 1970s, when the commercialisation of science was internalised in the structures and processes of university research administration. Debates often focused on the pros and cons of patenting inventions derived from research, with some academics believing that patents were necessary to protect the reputation of their institutions, for fear that an invention might be “wrongfully appropriated” by a “patent pirate” (Mowery, p. 36). Thus, the argument for patenting research inventions was based on the necessity of ‘quality control’, preventing the “incompetent exploitation of academic research that might discredit the research results and the university.” (37) This view saw patents as a way to “enhance the public good” and “advance social welfare” by protecting the invention from “pirates” who might otherwise patent it themselves and charge extortionate prices. Within the early pre-WWII debates around the use of patents by US universities, it was this moral argument of protecting a public good that led to patents being licensed widely and for low or no royalties. In fact, the few universities that began to apply for patents on their inventions did so through the Research Corporation, rather than directly themselves, so as to publicly demonstrate that their work was not corrupted by money.

The Research Corporation

The Research Corporation (see Mowery, ch. 4) was established by Frederick Cottrell of the University of California at Berkeley in 1912. Cottrell had received six patents for his work on the electrostatic precipitator and felt strongly that his research would achieve more widespread utility if it were patented than if it were given to the public for free. His view was that research placed in the public domain was not exploited effectively: “what is everybody’s business is nobody’s business.” (Mowery, quoting Cottrell, p. 59) However, Cottrell did not wish to involve university administrators in the management of the patents, as he also believed that this would set a dangerous precedent of involving non-academics too closely in the scientific endeavours of researchers. He worried that it would place an expectation on academics to continue to produce work of commercial value, increasing the “possibility of growing commercialism and competition between institutions and an accompanying tendency for secrecy in scientific work.” (Mowery, quoting Cottrell, p. 60)

Cottrell’s intentions appear to have been sincere. He was not interested in any significant personal accumulation of wealth derived from his patents, and he believed that the scientific endeavour and the public would benefit from the protection given by patents, provided an independent organisation managed them. Cottrell founded the Research Corporation to act on these beliefs, donating his patents to the Corporation in the form of an endowment, the income from which it would manage and re-distribute as research grants. Cottrell regarded the formation of the Research Corporation as “a sort of laboratory of patent economics” and, from its inception, states Mowery, “he envisioned the Research Corporation as an entity that would develop and disseminate techniques for managing intellectual property of research universities and similar organisations.” (60)

During its 70-year history, this “laboratory of patent economics” found it difficult to sustain its activity, despite a number of changes in approach. In its early pre-WWII period, it was an incubator for commercial applications of Cottrell’s patents, employing 45 engineers within its first five years, who not only designed applications for the use of precipitators but installed them for clients, too. When Cottrell’s endowment to the Corporation began to run out, the organisation looked to researchers in other technology fields to donate their inventions. In effect, it seems, the Corporation began to acquire patents so that it could afford to keep managing existing patents with dwindling returns and continue its philanthropic mission. The Research Corporation attracted a number of donations of patents from researchers with similar philanthropic agendas, as Mowery notes:

The expanding research collaboration between US universities and industry and the related growth of science-based industry increased the volume of commercially valuable academic research in the 1920s and 1930s, resulting in more and more requests from academic inventors to the Research Corporation for assistance in the management of patenting and licensing. (62)

So, for the first couple of decades of the Corporation’s existence, much of the income which sustained the organisation came from its work relating to Cottrell’s original precipitator inventions. As these revenues decreased, the Research Corporation looked for other sources of income. This coincided with the Great Depression, a time when universities were struggling to remain solvent, as was the case at MIT. Rather than merge with Harvard, MIT’s President, Karl Compton, charged Vannevar Bush, then Dean of MIT’s School of Engineering, with developing a patent policy for the university. With this, MIT asserted an institutional claim on any invention resulting from research funded by the university. However, the patent committee recommended that MIT should be relieved “of all responsibility in connection with the exploitation of inventions while providing for a reasonable proportionate return to the Institute in all cases in which profit shall ensue.” (Mowery, 64) To undertake this, MIT drew up an ‘Invention Administration Agreement’ (IAA) with the Research Corporation, which not only created a precedent for other universities but also marked a clear shift from the individual ownership of research inventions, many of which had been donated to the Corporation by philanthropic academics, to institutional ownership, which anticipated an income from that research (a 60/40 split between MIT and the Corporation). As a result, Cottrell’s original vision of an independent charitable organisation that turned patent income into grants for further scientific work had to meet the challenges of the Depression and the unpredictable nature of successfully exploiting research.

MIT institutionalised its relationship with the Research Corporation, using it to exclusively manage its patents from 1937 to 1946 and eventually cancelling its contract with the Corporation in 1963, by which time concerns about directly managing the commercial exploitation of its research had largely disappeared and the in-house skills to undertake the necessary administration had been developed over the course of the relationship. The partnership between MIT and the Research Corporation was never very profitable, with the Corporation making net losses during the decade in which it exclusively managed MIT’s patents. However, during and following WWII, the scale of research activity in US universities markedly increased. Mowery notes that

the expansion of military and biomedical research conducted in US universities during and after the war had increased the pool of potentially patentable academic inventions, and federal funding agencies compelled universities to develop formal patent policies during the early post-war period. The Research Corporation negotiated IAAs, modelled on the MIT agreement, with several hundred other US universities during the 1940s and 1950s. (66)

The history of the Research Corporation, as told by Mowery et al, is a fascinating one, pointing to the difficulties in successfully commercialising research through the licensing of patents. From 1945 to 1980, the top five patents held by the Corporation accounted for the majority of its income, and “although its portfolio of institutional agreements, patents, and licenses grew during the 1950s, growth in net revenues proved elusive.” (69)

The latter years of the Research Corporation were spent trying to build relationships with university staff in an effort to develop the skills necessary to identify potentially commercial inventions across different research disciplines. Ironically, in its attempt to off-load some of the administrative costs to institutions, the Corporation effectively trained university administrators to manage without its assistance, eroding the competitive advantage that it had previously held. During the 1970s, universities were also ‘cherry picking’ inventions to patent themselves, rather than going through the Research Corporation, in an effort to benefit from all of the potential revenue rather than a cut of it. “The Research Corporation’s 1975 Annual Report noted that many universities were beginning to withhold valuable inventions.” (Mowery, 77) This can be seen as a clear indication that the earlier concerns about universities directly exploiting their research had been largely overcome, and that during the 1960s and 1970s the institutional structures and skills within the larger research universities like MIT had been put in place, partly with the assistance of the Research Corporation.

Conclusion

The institutionalised commercialisation of research at MIT began in the 1930s, when MIT developed one of the first university patent policies, clearly indicating that the institution had a claim on the profits deriving from its research activity. Richard Stallman joined the DARPA-funded AI Lab at MIT as a Research Assistant in 1971, eight years after MIT had cancelled its agreement with the Research Corporation and fully internalised the process of identifying and managing patents. In this respect, MIT was at the forefront of a movement among US universities to systematically commercialise their research – to engage in ‘entrepreneurial science’ – where research groups are run as de facto firms (Etzkowitz, 2003). The military-funded work in Artificial Intelligence during the 1970s, to which Stallman contributed, can be understood within the context of the academy’s role in supporting the Cold War effort (Leslie, 1993; Chomsky et al, 1997; Simpson et al, 1998). This programme of funded research across a number of disciplines consequently increased the number of commercial opportunities (‘technology transfers’), not least in the fields of electronics, engineering and the emerging discipline of computer science. Indeed, Symbolics, the company which was spun off from the AI Lab in the early 1980s, attracting most of Stallman’s fellow hackers, produced Lisp machines for the Cold War military market in Artificial Intelligence, eventually going bust when the Cold War ended.

My point in discussing the rise in the use of patents to exploit government-funded research in US universities during the twentieth century is to show how the split that took place in the AI Lab in the early 1980s, devastating Stallman and compelling him to leave, was the result of a long process of US universities, led by MIT, internalising the ideas, skills and processes by which to make money from research. Just as the development of the Land Grant universities and the practice of applied science, patronised by vast sums of government funding, gave birth to hacker culture in the early 1960s, that culture remained tied to the structural changes taking place within US higher education during the 1960s and 1970s and its shift towards entrepreneurialism. Stallman’s ‘Garden of Eden’ was, I think, always going to be a short-lived experience, as he joined MIT at the beginning of a decade in which government funding from the three defence, space and energy agencies was in decline, from a peak of 80% of all federal funding in 1954 to 30% in 1970. As funding in these areas declined, and as the licensing of patents and the overall share of research funding coming from industry rose (see Mowery et al, 23-27), it seems inevitable that the institution which had given birth to hacking in the early 1960s would try to valorise the work of these researchers as fully as it could. Stallman has said that he and his colleagues did not object to the commercialisation of their work, but the instruments of this advancing entrepreneurialism (patents, copyright, licenses) were at odds with at least one of the long-established “institutional imperatives” of scientific practice: “communism” (Merton, 1973).

In a sincere yet novel way, Frederick Cottrell recognised this in 1912, when he established the Research Corporation as a charity and donated his patents so as to benefit social welfare and provide philanthropic grants for further scientific work. However, twenty years later, in the midst of the Depression, MIT asserted an institutional interest in the ‘intellectual property’ of its researchers and sought a majority cut of the income deriving from its patents. It took a further three decades or so for MIT to relinquish the use of the Research Corporation altogether and fully institutionalise the commercial exploitation of scientific research. Merton’s 1973 description of “communism” as a foundation of the scientific ethos seems both ironic, given that most scientific research in the US was being funded through the Cold War agencies, and removed from the reality of what was happening within institutions as they advanced ‘entrepreneurial science’. Merton understood this, and his description of the “communal character of science” (Merton, 274) surely refers more to a liberal ideal than to actual practice, just as Stallman’s characterisation of ‘freedom’ draws heavily on liberal political philosophy but is continuously confronted with the reality of capitalist valorisation. A blog post for another day…

The ‘MIT model’ and the history of hacker culture

Lisp Machines Inc. brochure

In my previous post in this series, I outlined a period in the early 1980s when the work of hackers at MIT was commercialised through the use of venture capital and, as a result, those hackers stopped sharing code. In response, Richard Stallman left MIT to start the GNU project and within a few years had created the GPL ‘copyleft’ license, the most popular open source license in use today.

I concluded by pointing to the Bayh-Dole Act (1980) as an event that is worth understanding when examining the role of universities in the history of hacker culture. In this post, I want to outline what I mean by this and throw out a few ideas that, I admit, need to be more fully explored.

The Bayh-Dole Act

The Bayh-Dole Act was enacted by the US Congress in December 1980 to clarify the ownership of, and encourage universities to freely exploit, the IP generated through government-funded research. Until that time, IP arising from federally funded research was, by default, owned by the government, and it was left to each federal agency to determine IP arrangements. Since WWII, the majority of research taking place in US universities has been (and remains) government funded, and in the midst of the industrial downturn of the 1970s, IP resulting from research was recognised as an under-exploited source of national economic potential. ((In 2004 it was 62% of all university research. See AAU (2006) University Research: The Role of Government Funding))

As an aside, in the UK the Patents Act 1977 gave employers the legal entitlement to employees’ inventions, but it wasn’t until the Department of Trade and Industry’s White Paper, Realising Our Potential (1993), that universities were encouraged to pursue patents arising from their research. Similarly, other countries followed the Bayh-Dole Act in the 1990s and 2000s. ((Rigby and Ramlogan outline this in the 2012 report, Compendium of Evidence on the Effectiveness of Innovation Policy Intervention: Support Measures for Exploiting Intellectual Property.)) Until that time, the title to IP generated by publicly funded research in UK universities was usually controlled by the National Research Development Corporation (1948-1981), the National Enterprise Board (1975-1981) or the British Technology Group (under public ownership from 1981 to 1991).

Similarly, prior to the Bayh-Dole Act in the US, federally funded IP was first controlled by the National Defense Research Committee (NDRC) (1940-1947), then the Office of Scientific Research and Development (OSRD) (1941-1947), and then by several organisations: the NSF, NIH, DOD, NASA, DOE, USDA and others. ((See Mowery et al (2004), Ivory Tower and Industrial Innovation: University-Industry Technology Transfer Before and After the Bayh-Dole Act.)) Etzkowitz has noted that despite significant levels of federal funding to US universities since WWII, in 1978 less than 4% of government-owned patents had been licensed. By creating a mechanism for universities to own the title to their research, the Bayh-Dole Act provided an incentive and driver for institutional change: whereas “previously only very few universities had the interest and capabilities to patent and license technology invented on campus”, between 1980 and 1990 the number of universities with technology transfer offices went from 25 to 200. Etzkowitz argues that this played a role in the development of Silicon Valley and made research universities “an explicit part of the US innovation system by restructuring the relationship among university, industry and government.” Out of the Bayh-Dole Act arose incubators, research parks, technology transfer arrangements and other entrepreneurial features of modern universities.

When I began to think about the Bayh-Dole Act, I wondered what effect it might have had on Richard Stallman, who, around the time the Act came into force, saw hackers leaving the AI Lab and joining two start-up companies (Lisp Machines Inc. and Symbolics Inc.) which were pursuing the exploitation of government-funded research and development that originated at MIT. Was the Bayh-Dole Act a catalyst in the development of the GNU project? The answer was no, not really. In pursuing this line of inquiry I contacted Stallman, who said that he wasn’t aware of the Bayh-Dole Act at that time.

“In 1980 we all supported commercial fabrication of Lisp machines, because we wanted people to be able to buy them.  Thus, no pressure on us was needed on that particular point. Only the details were controversial.  And we did not foresee the consequences. It could be that MIT’s method of releasing the source code to the two competing companies, which both made it proprietary and set the stage for the software war of 82/83, was facilitated in some way by Bayh-Dole. But I don’t know whether that is so.” ((Email from Stallman, 19th December 2012))

Stallman’s reply is a consistent reminder that his work has never been in opposition to the commercial exploitation of software; rather, it has been directed against restrictions on freedom. Although there were clearly differences of opinion among MIT hackers about the way in which Lisp machines should be commercialised, with a minority opposed to VC funding, the culture of MIT in 1980 was such that the Bayh-Dole Act was following MIT, rather than imposing anything significantly different onto its technology transfer processes. The Act was certainly one of several instruments that provided clarity around the ownership and commercial potential of software and other IP during that period (others were the Copyright Act of 1976 and two amendments in 1980), and, by requiring all US universities to consider ways in which government-funded research could be commercially exploited, it marketised research to some extent, creating a more intensive environment for technology transfer in which MIT and other universities found themselves competing. Etzkowitz summarises this as follows:

“Starting from practices developed early in the twentieth century at MIT, university technology transfer had become [with the Bayh-Dole Act] a significant input into industrial development. William Barton Rogers’s mid-nineteenth-century vision of a university that would infuse industry with new technology has become universalized from a single school to the entire US academic research system. Greatly expanded with federal research funding, the US academic enterprise has become a key element of an indirect US industrial policy, involving university, industry and government. The origins and effects of the Bayh-Dole Act are a significant chapter in the spread of the MIT model and the rise of entrepreneurial science.” (Etzkowitz, 113)

The question, then, is not what effect the Bayh-Dole Act had on Stallman and his fellow hackers in 1980, but rather: what was the ‘MIT model’, later universalised by the Bayh-Dole Act, and what unique role did it play in the history of hacker culture?

The MIT model

MIT began as a ‘Land Grant’ university, partially funded by a government grant to establish science-based universities which would “promote the liberal and practical education of the industrial classes” ((Read a transcript of the Morrill Act)) while undertaking research to improve agriculture. Land Grants were provided under the Morrill Act of 1862 and were a response to many years of campaigning by farmers and agriculturalists for research institutions that would contribute to the improvement of US farming. The Act led to states being allocated federal land which was to be sold or donated in order to establish such universities. The European polytechnic movement was also gaining popularity in the US and was seen as a model for new applied science universities, in contrast to the largely teaching universities that existed at that time. Following the Morrill Act, the Hatch Act (1887) and the Smith-Lever Act (1914) again encouraged applied research in US universities, as well as building capacity for technology transfer, again with a specific focus on the needs of agriculture.

Until the Land Grant universities of the late 19th century, there were no ‘research universities’ in the US, and even academic staff dedicated to research were rare. ((Richard C. Atkinson, William A. Blanpied (2008), Research Universities: Core of the US science and technology system)) Founded as both a teaching and research university with a remit to undertake applied science that could be transferred to industry, MIT has always had close contact with private enterprise; from early in its history, MIT employed engineers from industry as members of its academic faculty. By the 1920s, these academics were noted for their consulting activities, to the extent that there was tension within MIT between academics who felt that it was their job to focus on teaching and the needs of students, and those who spent a significant portion of their time focusing on the needs of industry. During the Great Depression of the 1930s, MIT was forced to confront this issue when it was accused by private consultants of effectively subsidising academics to consult, amounting to unfair competition. As a result, a policy called the ‘one-fifth rule’ was established, whereby MIT academics could spend a day a week using MIT resources to undertake consulting services. “Such activities as providing advice, testing materials in university laboratories and solving problems at company sites had become so much a part of the work of academic engineering professors that it proved impossible to disentangle them from the academic role. Prominent professors felt that their connection to the industrial world through consultation was essential to their research and teaching.” ((Etzkowitz, 37)) The role of academics acting as consultants is now commonplace in universities, but in the US it was at MIT that the practice was first formalised.

To protect and further exploit its industrial research, MIT developed its first patent policy in 1932. Such a policy allowed MIT to assert ownership of its research, which at that time was mostly internally funded, rather than allowing it to be freely exploited by industry partners. With a patent policy in place, MIT could license its research to industry and control its intellectual property.

To support this academic-industrial partnership, MIT was one of the first universities to establish a department to handle its commercial contracts, and its Division of Industrial Cooperation (DIC) later became the model by which the government provided research funding to all other universities. Being unique among universities in having such a department, MIT was in an advantageous position when the US entered WWII. Between 1940 and 1942, MIT’s research funding increased from $105,000 to $5.2M (roughly fiftyfold), thanks to its foresight in starting internally-funded military research early and in having bureaucratic processes in place to handle the large increase in research contracts. Prior to WWII, there was little government funding of research outside of the land-grant interests of agriculture, with around 5% of university funding coming from government and 40% of that relating to agriculture. Since 1946, federal funding to US universities has risen to between 12% and 26% of income, settling, on average, at around 15% in the 1980s; this funding is dispersed very unevenly and MIT is always one of the top recipients. ((See Lewontin in The Cold War and the University))
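
For concreteness, the scale of that wartime jump can be checked directly from the two figures just cited (a back-of-envelope calculation of mine, not a number from the sources):

$$\frac{\$5{,}200{,}000}{\$105{,}000} \approx 49.5 \approx 50\times$$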

There is much to say (in a later post) about what gave MIT its privileged position as a centre for government research during WWII, which led to the formation of military-funded Labs such as the one Stallman worked in during the 1970s. By 1940, through the entrepreneurial efforts I have only skimmed over here, MIT had become the model for the military-industrial-academic relationship which has continued to this day.

An entrepreneurial environment

In 1978, the AI Lab received additional DARPA funding for Lisp Machines, and in 1979 discussions began within the Lab about the commercialisation of Lisp Machines. Differences emerged between Noftsker and Greenblatt over the form that commercialisation should take, and this delayed the enterprises until 1981. It would be interesting to know whether the Bayh-Dole Act helped crystallise decision-making among AI Lab management at that time, although more likely it was the knowledge that Xerox had developed a Lisp Machine in 1979 that caused Noftsker and Greenblatt to consider commercialising their work. Tom Knight, who with Greenblatt was the original designer of MIT’s Lisp Machine, has said that in the late 1970s, “MIT was not in the business of making computers for other labs, and was not about to build them” ((If It Works, It’s Not AI: A Commercial Look at Artificial Intelligence Startups)), but as external demand grew, Noftsker and Greenblatt developed their own responses in the form of Symbolics Inc. and Lisp Machines Inc., and Stallman’s hacker community began to crumble.

Stallman worked to keep the Lisp Machine source code free to share until 1983, when, presumably, he saw little hope of re-forming the community of hackers he longed for and could see that academic culture and the intellectual property regime had changed in ways that were no longer compatible with sharing software. MIT’s long history of entrepreneurialism, and the more recent obligation to commercialise government-funded research, suggests to me that the writing was on the wall for Stallman and for the free sharing of source code within academia. It was not until the late 1980s that responses to this new regime emerged from within academia itself: the GPL (GNU), BSD (University of California, Berkeley) and MIT licenses.

The ‘MIT model’, later universalised within the US by the Bayh-Dole Act, provided universities with the legal means and the obligation to exploit federally-funded research. In doing so, it established not just a mechanism for patenting but an academic environment that was more entrepreneurial overall, as it allowed universities to create partnerships with individual academics who would themselves profit from their research. Researchers working at MIT in the 1970s were encouraged and well supported to look for commercial opportunities deriving from their work. What the Bayh-Dole Act did for MIT and other universities was provide clarification and incentives for exploiting government-funded R&D, which were previously absent. As Etzkowitz states: “In addition to rationalising and legitimising university patenting and licensing, the law induced a psychological change in attitudes towards technology transfer as well as an organisational change in encouraging the creation of offices for this purpose.” (p.114)

In my next post, I shall address the role of military funding in the development of hacker culture and the Labs which became playgrounds for hacking.

The role of the university in the development of hacker culture

[Image: A PDP-10 computer from the 1970s.]

The picture above shows a PDP-10 computer similar to those found in universities during the 1970s. The PDP-10 was produced by Digital Equipment Corporation (DEC) from the late 1960s until 1983 and is a useful point of reference for looking backwards and forwards through the history of hacking. The PDP-10 (and its predecessor, the PDP-6) were among the first ‘time-sharing’ computers which, among other significant technological developments, increased access to computers at MIT. Hackers working in the MIT Artificial Intelligence Lab (AI Lab) wrote their own operating system for the PDP-6 and PDP-10 called ITS, the Incompatible Timesharing System, named in deliberate contrast to the Compatible Time-Sharing System (CTSS) developed by MIT’s Computation Centre. Richard Stallman, whom Levy describes as “the last true hacker”, was a graduate student and then AI Lab staff system hacker who devoted his time to improving ITS and writing applications to run on the computer. Stallman describes the Lab during the 13 years he worked there as “like the Garden of Eden”, a paradise where a community of hackers shared their work.

However, this period came to a bitter end in 1981, when most of the hackers working with Stallman left to join two companies spun off from the Lab. Four left to join Lisp Machines, Inc. (LMI), led by Stallman’s mentor, the ‘hacker’s hacker’ Richard Greenblatt, while 14 of the AI Lab staff left to join Symbolics, Inc., a company led by Russell Noftsker, who had been Head of the AI Lab for eight years and had hired Stallman. (Noftsker had taken over from the original Director, Marvin Minsky, who worked on MIT’s DARPA-funded Project MAC, which later became the AI Lab.) For a while in 1979, Noftsker and Greenblatt discussed setting up a company together to sell Lisp Machines, but they disagreed on how to fund the business initially. Greenblatt wanted to rely on reinvesting early customer orders and to retain full control of the company, while Noftsker was keen to use a larger amount of venture capital, accepting that some control of the company would be given up to the investors. Unable to agree, they set up companies independently of each other, between them attracting most of the ITS hackers in the AI Lab, to the extent that Stallman’s beloved community collapsed. With maintenance and development of ITS decimated, the administrators of the AI Lab decided to switch to TOPS-20, DEC’s proprietary operating system, when a new PDP-10 was purchased in 1982. A year later, DEC ceased production of the PDP-10, which Stallman described as “the last nail in the coffin of ITS; 15 years of work went up in smoke.”

Lisp Machines went bankrupt in 1985, while Symbolics remained active until the end of the Cold War, when the military’s appetite for AI technologies slowed and the company subsequently declined. One more thing worth noting about these two AI Lab spin-offs is that within a year of doing business, Stallman and Symbolics clashed over the sharing of code. Having been deserted by his fellow hackers, Stallman made efforts to ensure that everyone continued to benefit from Symbolics’ enhancements to the Lisp Machine code, regularly merging Symbolics’ code with MIT’s version, which Greenblatt’s company used. Stallman was acting as a middle-man between the two code bases and the split community of hackers. Like other MIT customers, Symbolics licensed the Lisp Machine code from MIT, but it began to insist that its changes to the source code could not be redistributed beyond MIT, thereby cutting off Greenblatt’s Lisp Machines, Inc. and other MIT customers. Stallman’s efforts to keep the old AI Lab hacker community together through the sharing of distributed code came to an end.

In an essay reflecting on this period, Stallman writes about how it was a defining moment in his life, one from which he resolved to start the GNU Project and write his own ‘free’ operating system. In 1984, Stallman left his job at MIT to ensure that the university had no claim to the copyright of his work; however, he remained a guest of the AI Lab at the invitation of the Director, Patrick Winston, and still does so today. If you are at all familiar with the history of free software and the open source movement, you will know that Stallman went on to develop the General Public License in the late 1980s, which has become the most popular open source license in use today. Advocates of open education will know that the GPL was the inspiration for the development of Creative Commons licenses in 2001. Arguably, the impact of spinning off Lisp Machines and Symbolics from the AI Lab in 1981 is still being felt, and the 18 hackers who left to join those divergent startups can be considered paradigmatic for many hackers since, conscious of whether they are working on shared, open source software or proprietary software.

Everything I have described above can be pieced together in a few hours from existing sources. What is never discussed in the literature of hacking is the institutional, political and legal climate of the late 1970s and early 1980s, and indeed of the decades prior to this period, which led to that moment for Stallman in 1984. In fact, most histories of hacking begin at MIT in 1961 with the Tech Model Railroad Club and, understandably, concentrate on the personalities and the development of an ethic within the hacker community. What is never mentioned is what led Greenblatt and Noftsker to leave that ‘Garden of Eden’ and establish firms. What instruments at that time encouraged and permitted these men to commercialise their work at MIT? Much of what I have written above can be unravelled back several decades to show how instrumental the development of higher education in the USA during the 20th century was to the creation of a hacker culture. The commercialisation of applied research; the development of Cybernetic theory and its influence on systems thinking, information theory and Artificial Intelligence; the vast sums of government defense funding poured into institutions such as MIT since WWII; the creation of the first venture capital firm by Harvard and MIT; and most recently, the success of Y Combinator, the seed investment firm that initially sought to fund student hackers during their summer break, are all part of the historiography of hacking and the university.

Over the next few blog posts I will attempt to critically develop this narrative in more detail, starting with a discussion of the Bayh-Dole Act, introduced in 1980.

References

I’ve linked almost exclusively to Wikipedia articles in this blog post. It’s a convenient source that allows one to quickly sketch an idea. Much needs to be done to verify that information. There are a few books worth pointing out at this introductory stage of the narrative I’m trying to develop.

The classic journalistic account of the history of hacking is Steven Levy (1984) Hackers: Heroes of the Computer Revolution. I found this book fascinating, but it begins in 1958 with the Tech Model Railroad Club (chapter 1) and doesn’t offer any real discussion of the institutional and political cultures of the time which allowed a ‘hacker ethic’ (chapter 2) to emerge.

Eric Raymond’s writings are also worth reading. Raymond is a programmer and, as a member of the hacker tradition, has made several attempts to document it, including the classic account of the Linux open source project, The Cathedral and the Bazaar, and his work as Editor of the Jargon File, a glossary of hacker slang. Again, Raymond’s Brief History of Hackerdom begins in the early 1960s with the Tech Model Railroad Club and does not reflect on the events leading up to that moment in history.

Another useful and influential book on hackers and hacking is Himanen (2001) The Hacker Ethic. Himanen is a sociologist who examines the meaning of the work of hackers and their values in light of the Protestant work ethic.

Tim Jordan’s 2008 book, Hacking, is a general introduction to hacker and cracker culture and provides an insightful and useful discussion around hacking and technological determinism. Like Himanen, Tim Jordan is also a sociologist.

Stallman’s book, Free Software, Free Society (2002), offers a useful first-hand account of his time at MIT in chapter 1.

Sam Williams’ biography of Stallman, Free as in Freedom (2002), later revised by Stallman in collaboration with Williams (2010), is essential reading. Chapter 7, ‘A Stark Moral Choice’, offers a good account of the break-up of Stallman’s hacker paradise in the early 1980s.

E. Gabriella Coleman’s book, Coding Freedom: The Ethics and Aesthetics of Hacking (2012), is an anthropological study of hackers, in particular the free software hackers of the Debian GNU/Linux operating system. Coleman’s book is especially useful as she identifies hackers and hacking as a liberal critique of liberalism. This might then be usefully extended to other movements that hackers have influenced, such as Creative Commons.

Seminar: Hacking and the University

The role of the university in the development of hacker culture

The standard history of hacking begins with the Tech Model Railroad Club at MIT in 1961 and has continued to be closely associated with academic culture. Why is this so and what intellectual and institutional culture led to the development of a ‘hacker ethic’?

This seminar will propose a history of hacking in universities from the early 20th century, taking into consideration the role of military-sponsored research, the emergence of the ‘triple helix’ of academic, commercial and government enterprise, the influence of WWII cybernetic theory, and how the meritocracy of academia gave rise to Y Combinator, the most successful Internet seed investment fund today.

Part of the Centre for Educational Research and Development’s Thinking Aloud seminar series.

November 27th, 1-2pm, MB1012

Conjuring value out of OpenCourseWare

I came across a post by David Wiley the other day concerning MIT’s OpenCourseWare initiative, and it got me thinking about MIT and OER in general…

I would like to suggest that OER can be viewed as another example of the mechanisation of human work, which seeks to exploit a greater amount of collective abstract labour-power while reducing the input of, and therefore the reliance on, any one individual’s concrete contribution of labour. It’s important to understand what is meant by abstract labour and how it relates to the creation of value, which, as we’ll see, is at the core of MIT’s plans for the sustainability of OCW. Wendling provides a useful summary of how technology is employed to create value out of labour.

Any given commodity’s value can be seen either from the perspective of use or from the perspective of exchange: for enjoyment consumption or for productive consumption. Likewise, any given worker can be seen as capable of concrete labor or abstract labor-power. Labor is a qualitative relation, labor-power its quantitative counterpart. In capitalism, human labor becomes progressively interchangeable with mechanized forces, and it becomes increasingly conceptualized in these terms. Thus, labor is increasingly seen as mere labor-power, the units of force to which the motions of human work can be analytically reduced. In capitalism, machines have labor-power but do no labor in the sense of value-creating activity. ((Amy Wendling (2009) Karl Marx on Technology and Alienation. p. 104))

The use of technology in attempts to expand labour’s value-creating power is central to the history of capitalism. From capitalism’s agrarian origins in 16th-century England, technology has been used to ‘improve’ the value of private property. In discussing value, we should be careful not to confuse it with material wealth, a form of wealth measured by the quantity of products produced.

Marx explicitly distinguishes value from material wealth and relates these two distinct forms of wealth to the duality of labor in capitalism. Material wealth is measured by the quantity of products produced and is a function of a number of factors such as knowledge, social organization, and natural conditions, in addition to labor. Value is constituted by human labor-time expenditure alone, according to Marx, and is the dominant form of wealth in capitalism. Whereas material wealth, when it is the dominant form of wealth, is mediated by overt social relations, value is a self-mediating form of wealth. ((Postone (2009), Rethinking Marx’s Critical Theory in History and Heteronomy: Critical Essays, The University of Tokyo Centre for Philosophy, p. 40))

MIT’s OpenCourseWare initiative provides a good example of how Open Education, currently dominated by the OER commodity form, is contributing to the predictable course of the capitalist expansion of value. Through the use of technology, MIT has expanded its presence in the educational market by attracting private philanthropic funds to create a competitive advantage which has yet to be surpassed by any other single institution. In this case, technology has been used to create value out of the labour of MIT academics, who produce lecture notes and lectures which are then captured and published on MIT’s corporate website. In this process, value has been created for MIT through the application of science and technology which did not exist prior to the inception of OCW in 2001. The process has attracted $1,836,000 of private philanthropic funding, donations and commercial referrals. In 2009, this was 51% of the operating costs of the OCW initiative, the other 49% being contributed by MIT. ((See David Wiley’s blog post on MIT’s financial statement))
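
Taking those reported figures at face value, the implied scale of the operation can be worked out (a rough calculation of mine, not a figure from MIT’s accounts, and assuming the 2009 numbers are annual):

$$\text{total operating costs} \approx \frac{\$1{,}836{,}000}{0.51} \approx \$3{,}600{,}000, \qquad \text{MIT’s 49\% share} \approx \$1{,}764{,}000$$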

In terms of generating material wealth for MIT, the initiative is pretty much breaking even by attracting funds from private donors, but the value that MIT is generating out of its fixed capital of technology and workers should be understood as distinct from its financial accounts. Through the production of OERs on such a massive scale, MIT has released into circulation a significant amount of capital which enhances the value of its ‘brand’ (later I refer to this as ‘persona’) as educator and innovator. Furthermore, through the small but measurable intensification of staff labour time by the OCW initiative, additional value has been extorted from MIT’s staff, who remain essential to the value-creating process but increasingly insignificant as individual contributors. As a recent update from MIT on the OCW initiative shows, ((OpenCourseWare: Working Through Financial Changes)) following this initial expansion of “the value of OCW and MIT’s leadership position in open education”, and with the private philanthropic funding that has supported it due to run out, new streams of funding based on donations and technical innovation are being considered to “enhance the value of the materials we provide.” As the report acknowledges, innovation in this area of education has made the market for OER competitive, and for MIT to retain its lion’s share of web traffic it needs to refresh its offering on a regular basis and seek to expand its educational market footprint. The methods of achieving this that are being discussed are, naturally, technological: the use of social media, mobile platforms and a ‘click to enroll’ system of distance learning. Never mind that the OERs are Creative Commons licensed, ‘free’ and, notably, require attribution in order to be re-used; the production of this value-creating intellectual property needs to be understood within the “perpetual labour process that we know better as communication.” ((Söderberg, Johan (2007) Hacking Capitalism. The Free and Open Source Software Movement, p. 72)) Understood in this way, the commodification of MIT’s courses occurs long before the application of a novel license and distribution via the Internet. OCW is simply “a stage in the metamorphosis of the labour process”. ((Söderberg, 2007, p. 71))

MIT’s statement concerning the need to find new ways to create value out of their OCW initiative is a nice example of how value is temporally determined and quickly falls off as the production of OERs becomes generalised through the efforts of other universities. Postone describes this process succinctly:

In his discussion of the magnitude of value in terms of socially-necessary labor-time, Marx points to a peculiarity of value as a social form of wealth whose measure is temporal: increasing productivity increases the amount of use-values produced per unit time. But it results only in short term increases in the magnitude of value created per unit time. Once that productive increase becomes general, the magnitude of value falls to its base level. The result is a sort of treadmill dynamic. On the one hand, increased levels of productivity result in great increases in use-value production. Yet increased productivity does not result in long-term proportional increases in value, the social form of wealth in capitalism. ((Postone, 2009, p. 40))
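
To make the treadmill concrete, here is a minimal numerical sketch of my own (an illustration of the dynamic Postone describes, not his example). Suppose the social norm of productivity is $p$ units per hour, so the socially necessary labour time per unit is $1/p$ and one hour of labour yields one hour of value:

$$\underbrace{2p \cdot \tfrac{1}{p} = 2}_{\text{innovator, before generalisation}} \qquad\longrightarrow\qquad \underbrace{2p \cdot \tfrac{1}{2p} = 1}_{\text{after generalisation}}$$

An innovator who doubles productivity to $2p$ produces $2p$ units per hour, each still valued at the prevailing $1/p$ hours, and so momentarily yields two hours of value per hour worked. Once the doubling becomes general, socially necessary labour time falls to $1/2p$ per unit and value produced per hour returns to its base level of one, even though twice as many use-values are now produced.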

Seen as part of MIT’s entire portfolio, the contribution of OCW follows a well-defined path of capitalist expansion, value creation and destruction. It also points to the potential crisis of OER as an institutional commodity form: the diminution of academic labour, which is capitalism’s primary source of value, and the declining value of the generalised OER commodity form, which can only be counteracted through constant technological innovation, itself requiring the input of labour. As Wendling describes, this is part and parcel of capitalism, to which OER is not immune.

Scientific and technological advances reduce the necessary contribution of living labor to a vanishing point in the production of basic commodities. Thus, they limit the main source of the capitalist’s profit: the exploitation of the worker. This shapes the capitalist use of science and technology, which is a use that is politicized to accommodate this paradox. In this usage, the introduction of new machinery has two effects. First, the machine displaces some workers whose functions it supplants. Second, the machine heralds a step up in the exploitation of the remaining workers. The intensity and length of their working days are increased. In addition, as machinery is introduced, capital must both produce and sell on an increasingly massive scale. Losses from living labor are recompensed by the multiplication of the small quantities of remaining labor from which value can be extorted. In all of these ways, capitalism and technological advancement, far from going hand in hand, are actually inimical to one another, and drive the system into crisis. In this respect, a straightforward identification of constantly increasing technicization with capitalism misses the crucial dissonance between the two forces. ((Wendling 2009, p.108))

The example of MIT given above is not intended as a criticism of any single member of the OCW team at MIT, who are no doubt working on the understanding that the initiative is a ‘public good’ – and in terms of creating social wealth, it is a public good. My intention here is to show how seemingly ‘good’ initiatives such as OCW also compound the social relations of capitalism, based on the exploitation of labour and the reification of the commodity form.

Furthermore, being the largest single provider of ‘Open Education’, MIT’s example can be carried over into a discussion of the Open Education movement’s failure to provide an adequate critique of the institution as a form of company and regulator of wage-work.

As Neocleous has shown, ((Neocleous, Mark (2003) Staging Power: Marx, Hobbes and the Personification of Capital)) in modern capitalism the objectification of the worker as the commodity of labour serves to transform the company into a personified subject, with greater rights under, and fewer responsibilities to, the law than people themselves. As the university increasingly adopts corporate forms, objectives and practices, so the role of the academic as abstract labour is to improve the persona of the university. Like many other US universities, MIT awards tenure to academics who are “one of the very tiny handful of top investigators in your field, in the world”, thus rewarding, but also retaining through the incentive of tenure, staff who bring international prestige to MIT. ((Unraveling tenure at MIT)) Through an accumulation of “top investigators”, effort and attention are increasingly diverted from individual achievement and reputation to the achievements of the institution, measured by its overall reputation, which is rewarded with increased government funding, commercial partnerships and philanthropic donations. This, in turn, attracts a greater number of better staff and students, who join the university in order to enjoy the benefits of this reward. Yet once absorbed into the labour process, these individuals serve the social character of the institution, which is constantly being monitored and evaluated through a system of league tables.

“…the process of personification of capital that I have been describing is the flip side of a process in which human persons come to be treated as commodities – the worker, as human subject, sells labour as an object. As relations of production are reified so things are personified – human subjects become objects and objects become subjects – an irrational, “bewitched, distorted and upside-down world” in which “Monsieur le Capital” takes the form of a social character – a dramatis personae on the economic stage, no less.” ((Neocleous, 2003, p. 159))

To what extent the Open Education movement can counteract this personification of educational institutions and the subtle objectification of their staff and students is still open to question, although the overwhelming trend so far is for OER to be seen as sustainable only to the extent that it can attract private and state funding, which, needless to say, serves the reputational character (a significant source of value, according to Neocleous) of the respective universities as institutions for the ‘public good’. Yet, as Postone has argued, the creation of this temporally determined form of value is achieved through the domination of people by time, structuring our lives and mediating our social relations. The increased use of technology is, and always has been, capitalism’s principal technique for ‘improving’ the ratio of the input of labour-power, measured abstractly by time, to the output of value, which is itself temporal and therefore in constant need of expansion. And so the imperative of conjuring value out of labour goes on…

The Student as Producer

We were recently unsuccessful in an application to JISC for a Learning and Teaching Innovation Grant. Nevertheless, the project is one that we’re keen on pursuing in some shape or form, so I thought I’d post the details here and invite comment.
