Hacking, war and the university

In each of the posts in this series about the role of the university in the development of hacker culture, I have indicated that a greater understanding of the role of military research funding should be central to a history of hacking. The role of federal funding from government agencies such as the Department of Defense looms so large in the history of hacking that I assumed it would be one of the first posts I wrote. However, I found that in order to understand funding of this type, I had to explore the history of US higher education, in particular the purpose of the Morrill Act and how it led to the development of universities whose remit was initially ‘applied’ scientific research and vocational training, in contrast to the teaching universities of the mid-nineteenth century, such as Harvard and Columbia. The Land Grant universities’ focus on applied science, and their mandated responsibility for the development of their local regions, led to research activity that became increasingly entrepreneurial over the decades, culminating in the Bayh-Dole Act, developed in the late 1970s during a period of economic decline. Similarly, it was economic conditions during the 1920s that led to the development of a model for handling industrial contracts at MIT, which was later used for handling federal funding across several universities during WWII (Etzkowitz, 2002; Lowen, 1997).

The defence-funded ‘AI Lab’ where Richard Stallman worked between 1971 and 1984 must be situated within a complex association of projects, people and funding arrangements at MIT that stretches back to the turn of the twentieth century. The fact that hacker culture at MIT during the 1960s and 70s was wholly reliant on military funding has been acknowledged, but not studied, in the existing literature on hacking, and the extent to which it was a product of university-military-industrial relations is an area for further study.

Before World War II

Federal funding to US universities was not a significant source of research income until the Second World War. Lowen (1997) and Etzkowitz (2002) point to the experience of the First World War and then the Great Depression as stimuli for the closer relationship between universities and federal government. MIT’s President, Karl Compton, and Vannevar Bush, at that time Dean of MIT’s School of Engineering and Vice President of MIT, were among a group of academics who “were dissatisfied with the military’s use of academic science during World War I” (Etzkowitz, 2002, 42). This dissatisfaction should be understood in the context of an eventual shift in science policy leadership from agriculturalists to physicists during the inter-war years (Pielke Jr, 2012). Compton and Bush sought to establish an agency under the control of academics that would liaise with the military and transfer their innovations to a future war effort. Around this time, MIT lost the state funding that had originated with its land grant and entered a financial crisis, one which almost led MIT to become part of Harvard’s engineering school. To avoid this embarrassment, MIT’s leaders made a conscious effort to develop relations with industry, and by the 1930s the Institute had developed policies for patenting and consulting practices, as well as appealing to alumni networks.

In 1919, MIT implemented a ‘Technology Plan’ in an effort to raise the $8m required to save the Institute. As a beneficiary of many MIT graduates, George Eastman (of Eastman Kodak) provided half of this sum. Yet despite this support, the Technology Plan was only a partial success, with interest from other companies dwindling after the initial contracts expired – after all, MIT was now charging for research services it had once provided to industry for free. By 1939, Etzkowitz notes, “it was accepted that the Technology Plan was a failure” (45). However, the legacy of the plan was much greater, as it established an office that negotiated research contracts with industry, and this was then used as a model for how government transferred funds to MIT and a few other universities during World War II.

War-time government funding

By the time World War II began, leading academics such as Vannevar Bush, by then President of the Carnegie Institution of Washington, had successfully lobbied government to create a federal agency to co-ordinate military research. In contrast to the relatively low position accorded to academic scientists during the First World War, Bush and others sought to place academics at the heart of government policy-making through the establishment of the National Defense Research Committee (NDRC) (1940-1). The composition of this ground-breaking committee was revealing: of the eight original members, four were academics, two were from the military, one was from business and another was the US Commissioner of Patents, underlining the strategic relationship between government, industry and the academy (see LoC records). The most significant achievement of the NDRC’s short history was the formation of the MIT Radiation Lab (‘Rad Lab’), which developed radar technology during the war. The Rad Lab (1940-45) was shut down at the end of the war, but became the model for future ‘labs’ at MIT and elsewhere, such that there is a ‘genealogy’ of labs (such as the AI Lab), projects (e.g. ‘Project MAC’) and people (like Richard Stallman) that can be traced back to the Rad Lab and the NDRC.

In 1941, the NDRC was superseded by the Office of Scientific Research and Development (OSRD) (1941-7), led by Vannevar Bush. The OSRD was a fully-fledged funding agency for distributing public money to support research at universities. Five universities became the main beneficiaries of this funding during the war: MIT, Johns Hopkins, Berkeley, Chicago and Columbia, and the OSRD co-ordinated a mass migration of scientists from universities across the country to work at one of these select centres of research.

The increase in research funding during WWII was huge. Mowery et al (2004) show that federal R&D funding went from $784.9m to $12.4bn during the 1940-45 period, more than a fifteen-fold increase (all figures from Mowery et al are in 1996 dollars). MIT was the largest single recipient ($886m), receiving almost seven times more than Western Electric, the largest commercial recipient ($130m) (Mowery et al, 2004, 22). Consequently, the contractual arrangements developed at MIT prior to and during WWII, and the level of funding administered on behalf of the federal government, fundamentally changed the relationship between government and universities. The success of this arrangement led President Roosevelt to ask Vannevar Bush to draft the famous policy report, Science: The Endless Frontier (1945), in which he argued that “basic research” was the basis for economic growth, an assumption that remains common though questionable today (Pielke Jr, 2012).

Post-war funding

Despite a brief dip in funding immediately after the war, when the OSRD was dissolved and discussions took place over the formation of a new peace-time agency, by 1965 federal funding accounted for 73% of all academic R&D funding to US universities, compared to just 24% in 1935. Post-war funding was dominated by two areas: defence and health, with military-related funding being split between the Department of Defense, NASA and the Department of Energy. During the 1960s and 70s “golden age” of hacking at MIT, the overall level of federal funding to universities declined from 73% of all university R&D funding in 1965 to 63% in 1985, by which time a greater percentage of income was being derived from industry, assisted by the Bayh-Dole Act. The Second World War solved MIT’s inter-war financial crisis, as Forman has noted:

MIT, on the inside track, emerged from the war with a staff twice as large as it had had before the war, a budget (in current dollars) four times as large, and a research budget ten times as large – 85% from the military services and their nuclear weaponeer, the AEC.

An examination of the funding arrangements for academic R&D from the post-WWI period to the Bayh-Dole Act in 1980 reveals dramatic change, not only in the amount of public money being transferred to universities, but also in the way that academic scientists developed much closer relationships with government and re-conceptualised the idea, practice and purpose of science. A new ideology of science was formed, encapsulated by its chief architect, Vannevar Bush, in Science: The Endless Frontier, which redefined the “social contract” between scientists and government and argued for the importance of funding for “basic research”. Throughout these developments, dramatic changes were also taking place in the institutional forms of universities and in the movement of academic labour from institution to institution and from research project to research project. So-called ‘labs’, like MIT’s Lincoln Lab, were large semi-autonomous organisations in themselves, employing thousands of researchers and assistants. They became the model for later ‘science parks’ and spawned projects and research groups which then became independent ‘labs’ with staff of their own, such as the AI Lab. Stanford University learned from this model, which arguably led to the creation of Silicon Valley (Etzkowitz, 2002; Gillmor, 2004).

The AI Lab, where Richard Stallman worked from 1971 to 1984, is legendary in the history of hacking (Levy, 1984). Like many MIT labs, its origins can be traced back to the Rad Lab through the Lincoln Lab and the Research Laboratory of Electronics (RLE), where some of its personnel formerly worked and developed their thinking around Artificial Intelligence. The AI Lab began as a research group within Project MAC (Multiple Access Computer and Machine-Aided Cognition), which was set up in 1963 and originally led by Robert Fano, who had worked in the Rad Lab. J.C.R. Licklider, who had helped establish the Lincoln Lab and worked at RLE, succeeded Fano as Director of Project MAC in 1968; he had worked for DARPA, an agency of the Department of Defense, since 1962 and was responsible for the original Project MAC grant. Licklider remained Director of Project MAC until 1971, a year after Marvin Minsky, who led Project MAC’s AI research group, split off to form the AI Lab in 1970, shortly before Stallman arrived as a Research Assistant. In this pre-history of hacker culture, little more needs to be said about the AI Lab, as it is well documented in Levy’s book. What I wish to underline is the extent to which the AI Lab and Stallman’s ‘Garden of Eden’ was the strategic outcome of institutional, government and commercial relationships stretching back to the NDRC and the Rad Lab.

A “triple helix” or an “iron triangle”?

To sketch the intertwining history of such labs and projects at MIT alone is not straightforward, and a preliminary effort to do so shows, as one might expect, a great deal of institutional dynamism over the years. As economic conditions and government funding priorities shifted, institutions responded by re-aligning their focus, all the while lobbying government and coaxing industry. Etzkowitz refers to this as the ‘triple helix’ of university-industry-government relations and evidence of a “second academic revolution”. Others have been more critical, referring to the “military-industrial-academic complex” (apparently Eisenhower’s original phrase) (Giroux, 2007), and the “iron triangle” of self-perpetuating academic, industrial and military collaboration (Edwards, 1997, referring to Adams, 1982). From every perspective, there is no doubt that these changes gradually took place, spurred on at times by WWII and the Cold War. US universities (and later other national systems of HE) initially incorporated research as a social function of higher education (revolution #1) and then moved to “making findings from an academic laboratory into a marketable product” (revolution #2) (e.g. Etzkowitz, 1997, 2001). Today, each university, such as my own, has an ‘enterprise strategy’, ‘income generation’ targets and various other instruments, which can be traced back to the model that MIT established in the 1920s.

Although the accounts of Etzkowitz and Mowery et al are compelling, they provide only cursory mention of the struggle that has taken place over the years as the university has increased its ties with the military and industry. In particular, such accounts rarely dwell on the opposition and concern within academia over the receipt of large sums of defence funding, and the ways in which academics circumvented and subverted their complicit role in this culture. A number of books have been written which do critically examine this ‘second revolution’ or the “iron triangle” (e.g. Edwards, 1997; Leslie, 1993; Heims, 1991; Chomsky et al, 1997; Giroux, 2007; Simpson et al, 1998; Noble, 1977; Turner, 2006; Mindell, 2002; Wisnioski, 2012).

As these critics’ accounts have shown, there has always been a great deal of unease, and at times dissent, among students and staff at MIT and other universities which were recipients of large amounts of military funding. Although I do not wish to generalise the MIT hackers of the 1960s and 70s as overtly political, they were clearly acting against the constraints of an intensifying managerialism within institutions across the US, and in particular the rationalisation of institutional life pioneered by the engineering profession and its ties with corporate America (Noble, 1977). Hackers’ attraction to time-sharing systems, the ability to personalise computing, programmatic access to the underlying components of computers and the use of computers for leisure activities are characteristic of a sub-culture within the university (Levy, 1984; Wisnioski, 2012) and, to some extent, of the developing counter-culture of that period (Turner, 2006). Such accounts, I think, are vitally important to understanding the development of hacker culture, as are the more moderate accounts of federal funding and the development of the entrepreneurial university.

My final post in this series highlights the relationship between venture capital, the university and hacking.

Hacking and the commercialisation of scientific research

I began this series of blog posts (my notes for a journal article) by outlining what might be considered the ‘flight of hackers’ from the university in the early 1980s, with the aim of then working backwards to establish the role of ‘the university’ (i.e. academia) in the development of hacker culture. My second post began to focus on the role of MIT in particular, as a model of the ‘entrepreneurial university’ which other US universities copied, and on the generalisation of this model through the Bayh-Dole Act in 1980. Next, I had intended to move on to discuss the role of military funding, which underwrote the AI Lab at MIT where Richard Stallman, “the last of the true hackers”, worked (Levy, 1984). However, I will leave that blog post for another day, as there is more to say on the commercialisation of scientific research up to 1980, which I would argue played a significant role in the birth of hacking in academia (often dated to 1961) and its agonising split when Stallman left his ‘Garden of Eden’ at MIT in 1984.

Until now, I have been drawing heavily on the work of Etzkowitz (2002), who has written about the rise of ‘entrepreneurial science’ at MIT and then Stanford. He draws upon the work of Mowery et al (2004), who provide an excellent account of the growth of patenting up to and in light of the Bayh-Dole Act. My interest is in their discussion of patenting prior to the 1980 Act, just four years before Stallman left MIT. As I wrote in my previous post, Stallman does not think that the Bayh-Dole Act had a direct impact on the “software war of 1982/83”, which makes sense in light of both Etzkowitz’s and Mowery’s accounts. By the time of the Bayh-Dole Act, MIT had been gradually internalising the commercialisation of its academic endeavours for decades, as had many other large research universities in the US, and Mowery concludes that the effect of the Act has been “exaggerated” and that “much of the post-1980 upsurge in university patenting and licensing, we believe, would have occurred without the Act and reflects broader developments in federal policy and academic research.”

In this post, I want to highlight those broader developments in order to provide a richer account of the development of hacker culture, which, although it took flight from the university in 1984, has very much returned in the last decade with the growth of the ‘openness’ agenda and the development of initiatives such as open education, OER, MOOCs and open data.

Of course, hackers never left the university entirely, but the early 1980s does seem to mark a point where hacker culture assumed an independence from academic culture, a division we might relate to the later tension between ‘free software’ and ‘open source’ hackers. This tension between ‘freedom’ and ‘openness’ has been described by Stallman as a conflict in emphasis between the “ideas of freedom, community, and principle” (free software) and “the potential to make high quality, powerful software” (open source). Although the free software hackers have never wholly shunned the support of business, it is clear that Stallman believes the primary focus should be a moral and ethical one and that an emphasis on business concerns “can be disastrous” to the ideals of the free software movement.

This value-based conflict over the relationship between hackers and business is also found among academics today, with some resisting the gradual move to an ‘entrepreneurial university’ model, while others welcome it (see Etzkowitz 1998, 2000a, 2000b, 2001, 2002, 2003). In the US, the rise of ‘entrepreneurial science’ can be traced right back to the founding of the Land Grant universities, which I mentioned in my earlier post. Here, I want to focus specifically on the key instrument by which the commercialisation of science takes place: patents and their use in ‘technology transfer’ to industry. I should note that terms such as ‘entrepreneurial university’ and ‘technology transfer’ are not value-free, and by discussing their historical development we might subject the development of hacker culture to a critique similar to the one Slaughter and Leslie have applied to ‘Academic Capitalism’. In this post, I am developing the basis for that critique.

Patents as public good

Chapters 2-4 of Mowery’s book cover the history of patenting by US universities in great detail, pointing to the Morrill Act (1862) and the remit of Land Grant universities to serve their local regions by supporting agriculture and engineering (the ‘mechanical arts’). The book’s authors point to key “structural characteristics” of US higher education which laid the groundwork for the later commercialisation of scientific research. First, with the introduction of the land grants, US higher education has been notable for its scale and the autonomy of its institutions, devolving the responsibility of administering federal funds to the respective state governments. However, this autonomy came with a keenly felt responsibility to the local region, and the founders and later Presidents of Land Grant universities, like MIT, understood their obligation to meet the needs of their local communities. This is evident in the Land Grant universities’ “utilitarian orientation to science” (10) and tendency to provide vocational education, combining training with research in methods to improve agriculture (12). Finally, US higher education was characterised by “the emergence of a unified national market for faculty at US research universities” (13). Compared to other national systems of higher education, the departmental structures of US universities and the corresponding division into disciplinary degree programmes meant that academics focused on their contribution to their discipline over and above their institution. This resulted in greater inter-institutional movement among academics and therefore a greater diffusion of ideas and research practices. Combined with the tendency towards applied science and vocational education, this also led to a “rapid dissemination of new research findings into industrial practice – the movement of graduates into industrial employment” (13). Mowery argues that these characteristics of US higher education

“created powerful incentives for university researchers and administrators to establish close relationships with industry. They also motivated university researchers to seek commercial applications for university developed inventions, regardless of the presence or absence of formal patent protection.” (13).

In effect, the discipline of engineering and the practice of applied science became institutionalised within US higher education, with MIT, founded in 1861, being one of the first universities to offer engineering courses. MIT offered its first electrical engineering course in 1882, and by the 1890s “schools like MIT had become the chief suppliers of electrical engineers” in the US (Mowery, quoting Wildes and Lindgren, 15), meeting a national need created by the emerging electricity-based industries. I will address the growth of engineering as a discipline, and the political tension within it, in a later post, as it seems to me that a counter-culture among engineers can be found in hackers today.

The moral dilemma that Stallman faced during the “software wars of 1982/83” is familiar to many academics, and the “patent problem” has been the subject of much heated debate throughout the history of the modern university (see Mowery, ch. 3). In the US, although universities had worked in collaboration with industry since the founding of the land grant institutions, they remained sensitive about the handling of commercial contracts until the early 1970s, when the commercialisation of science was internalised in the structures and processes of university research administration. Debates often focused on the pros and cons of patenting inventions derived from research, with some academics believing that patents were necessary to protect the reputation of the institution, for fear that an invention might be “wrongfully appropriated” by a “patent pirate” (Mowery, p. 36). Thus, the argument for patenting research inventions was based on the necessity of ‘quality control’, thereby preventing the “incompetent exploitation of academic research that might discredit the research results and the university” (37). This view saw patents as a way to “enhance the public good” and “advance social welfare” by protecting the invention from “pirates” who might otherwise patent the invention themselves and charge extortionate prices. Within the early pre-WWII debates around the use of patents by US universities, it was this moral argument of protecting a public good that led to patents being licensed widely and for low or no royalties. In fact, the few universities that began to apply for patents on their inventions did so through the Research Corporation, rather than directly themselves, so as to publicly demonstrate that their work was not corrupted by money.

The Research Corporation

The Research Corporation (see Mowery, ch. 4) was established by Frederick Cottrell of the University of California at Berkeley in 1912. Cottrell had received six patents for his work on the electrostatic precipitator and felt strongly that his research would be of more widespread utility if it were patented than if it were provided to the public for free. His view was that research placed in the public domain was not exploited effectively: “what is everybody’s business is nobody’s business” (Mowery, quoting Cottrell, p. 59). However, Cottrell did not wish to involve university administrators in the management of the patents, as he believed that this would set a dangerous precedent by too closely involving non-academics in the scientific endeavours of researchers. He worried that it would place an expectation on academics to continue to produce work of commercial value, increasing the “possibility of growing commercialism and competition between institutions and an accompanying tendency for secrecy in scientific work.” (Mowery, quoting Cottrell, p. 60)

Cottrell’s intentions appear to have been sincere. He was not interested in any significant personal accumulation of wealth derived from his patents, and believed that the scientific endeavour and the public would benefit from the protection given by patents, provided an independent organisation managed them. Cottrell founded the Research Corporation to act on these beliefs, donating his patents to the Corporation as an endowment, the income from which was to be managed and re-distributed as research grants. Cottrell regarded the formation of the Research Corporation as “a sort of laboratory of patent economics” and from its inception, states Mowery, “he envisioned the Research Corporation as an entity that would develop and disseminate techniques for managing intellectual property of research universities and similar organisations.” (60)

During its 70-year history, this “laboratory of patent economics” found it difficult to sustain its activity, despite a number of changes in approach. In its early pre-WWII period, it was an incubator for commercial applications of Cottrell’s patents, employing 45 engineers within the first five years, who not only designed applications for the use of precipitators but also installed them for clients. When Cottrell’s endowment to the Corporation began to run out, the organisation looked to researchers in other technology fields to donate their inventions. In effect, it seems the Corporation began to acquire patents so that it could afford to keep managing existing patents with dwindling returns and continue its philanthropic mission. The Research Corporation attracted a number of donations of patents from researchers with similar philanthropic agendas, as Mowery notes:

The expanding research collaboration between US universities and industry and the related growth of science-based industry increased the volume of commercially valuable academic research in the 1920s and 1930s, resulting in more and more requests from academic inventors to the Research Corporation for assistance in the management of patenting and licensing. (62)

So, for the first couple of decades of the Corporation’s existence, much of the income which sustained the organisation came from its work relating to Cottrell’s original precipitator inventions. As these revenues decreased, the Research Corporation looked for other sources of income. This coincided with the Great Depression, a time when universities were struggling to remain solvent, as was the case at MIT. Rather than merge with Harvard, MIT’s President, Karl Compton, charged Vannevar Bush, then Dean of MIT’s School of Engineering, with developing a patent policy for the university. With this, MIT asserted an institutional claim on any invention resulting from research funded by the university. However, the patent committee recommended that MIT should be relieved “of all responsibility in connection with the exploitation of inventions while providing for a reasonable proportionate return to the Institute in all cases in which profit shall ensue.” (Mowery, 64) To undertake this, MIT drew up an ‘Invention Administration Agreement’ (IAA) with the Research Corporation, which not only created a precedent for other universities, but also marked a clear shift from the individual ownership of research inventions, many of which had been donated to the Corporation by philanthropic academics, to institutional ownership, which anticipated an income from that research (a 60/40 split between MIT and the Corporation). As a result, Cottrell’s original vision of an independent charitable organisation that turned patent income into grants for further scientific work had to meet the challenges of the Depression and the unpredictable nature of successfully exploiting research.

MIT institutionalised its relationship with the Research Corporation, using it to exclusively manage its patents from 1937 to 1946, and eventually cancelled its contract with the Corporation in 1963, by which time concerns about directly managing the commercial exploitation of its research had largely disappeared and the in-house skills to undertake the necessary administration had been developed over the course of that relationship. The partnership between MIT and the Research Corporation was never very profitable, with the Corporation making net losses during the decade that it exclusively managed MIT’s patents. However, during and following WWII, the scale of research activity in US universities markedly increased. Mowery notes that

the expansion of military and biomedical research conducted in US universities during and after the war had increased the pool of potentially patentable academic inventions, and federal funding agencies compelled universities to develop formal patent policies during the early post-war period. The Research Corporation negotiated IAAs, modelled on the MIT agreement, with several hundred other US universities during the 1940s and 1950s. (66)

The history of the Research Corporation, as told by Mowery et al, is a fascinating one, pointing to the difficulties in successfully commercialising research through the licensing of patents. Between 1945 and 1980, the top five patents held by the Corporation accounted for the majority of its income, and “although its portfolio of institutional agreements, patents, and licenses grew during the 1950s, growth in net revenues proved elusive.” (69)

The latter years of the Research Corporation were spent trying to build relationships with university staff in an effort to develop the skills necessary to identify potentially commercial inventions across different research disciplines. Ironically, in its attempt to off-load some of the administrative costs to institutions, the Corporation effectively trained university administrators to manage without its assistance, eroding the competitive advantage that the Corporation had previously held. During the 1970s, universities were also ‘cherry picking’ inventions to patent themselves rather than through the Research Corporation, in an effort to retain all of the potential revenue rather than a cut of it. “The Research Corporation’s 1975 Annual Report noted that many universities were beginning to withhold valuable inventions.” (Mowery, 77) This is a clear indication that the earlier concerns about universities directly exploiting their research had been largely overcome, and that during the 1960s and 1970s the institutional structures and skills within the larger research universities, like MIT, had been put in place, partly with the assistance of the Research Corporation.

Conclusion

The institutionalised commercialisation of research at MIT began in the 1930s, when MIT developed one of the first university patent policies, clearly indicating that the institution had a claim to the profits deriving from its research activity. Richard Stallman joined the DARPA-funded AI Lab at MIT as a Research Assistant in 1971, eight years after MIT had cancelled its agreement with the Research Corporation and fully internalised the process of identifying and managing patents. In this respect, MIT was at the forefront of a movement among US universities to systematically commercialise their research – to engage in ‘entrepreneurial science’ – where research groups are run as de facto firms (Etzkowitz, 2003). The military-funded work in Artificial Intelligence during the 1970s, to which Stallman contributed, can be understood within the context of the academy’s role in supporting the Cold War effort (Leslie, 1993; Chomsky et al, 1997; Simpson et al, 1998). This programme of funded research across a number of disciplines consequently increased the number of commercial opportunities (‘technology transfers’), not least in the fields of electronics, engineering and the emerging discipline of computer science. Indeed, Symbolics, the company which was spun off from the AI Lab in the early 1980s, attracting most of Stallman’s fellow hackers, produced Lisp Machines for the Cold War military market in Artificial Intelligence, eventually going bust when the Cold War ended.

My point in discussing the rise in the use of patents to exploit government-funded research in US universities during the twentieth century is to show how the split that took place in the AI Lab in the early 1980s, devastating Stallman and compelling him to leave, was the result of a long process of US universities, led by MIT, internalising the ideas, skills and processes by which to make money from research. Just as the development of the Land Grant universities and the practice of applied science, patronised by vast sums of government funding, gave birth to hacker culture in the early 1960s, so that culture remained tied to the structural changes taking place within US higher education during the 1960s and 1970s and the shift towards entrepreneurialism. Stallman’s ‘Garden of Eden’ was, I think, always going to be a short-lived experience, as he joined MIT at the beginning of a decade in which government funding from the three defence, space and energy agencies was in decline, from a peak of 80% of all federal funding in 1954 to 30% in 1970. As funding in these areas declined, and as the licensing of patents and the overall share of research funding coming from industry rose (see Mowery et al, 23-27), it seems inevitable that the institution which had given birth to hacking in the early 1960s would try to valorise the work of these researchers as effectively as it could. Stallman has said that he and his colleagues did not object to the commercialisation of their work, but the instruments of this advancing entrepreneurialism (patents, copyright, licenses) were at odds with at least one of the long-established “institutional imperatives” of scientific practice: “Communism” (Merton, 1973).

In a sincere yet novel way, Frederick Cottrell recognised this in 1912, when he established the Research Corporation as a charity and donated his patents so as to benefit public social welfare and provide philanthropic grants for further scientific work. However, twenty years later, in the midst of the Depression, MIT asserted an institutional interest in the ‘intellectual property’ of its researchers and sought a majority cut of the income deriving from its patents. It took a further three decades or so for MIT to relinquish the use of the Research Corporation altogether and fully institutionalise the commercial exploitation of scientific research. Merton’s 1973 account of “communism” as a foundation of the scientific ethos seems both an ironic use of the term, given that most scientific research in the US was being funded through the Cold War agencies, and removed from the reality of what was happening within institutions as they advanced ‘entrepreneurial science’. Merton understood this, and his description of the “communal character of science” (Merton, 274) surely refers more to a liberal ideal than to actual practice, just as Stallman’s characterisation of ‘freedom’ draws heavily on liberal political philosophy but is continuously confronted with the reality of capitalist valorisation. A blog post for another day…