The role of the university in the development of hacker culture

[Image: A PDP-10 computer from the 1970s.]

The picture above is of a PDP-10 computer similar to those found in universities during the 1970s. The PDP-10 was produced between 1968 and 1983 by Digital Equipment Corporation (DEC) and is a useful point of reference for looking backwards and forwards through the history of hacking. The PDP-10 and its predecessor, the PDP-6, were among the first ‘time-sharing’ computers, which, among other significant technological developments, increased access to computers at MIT. Hackers working in the MIT Artificial Intelligence Lab (AI Lab) wrote their own operating system for the PDP-6 and PDP-10 called ITS, the Incompatible Timesharing System, its name a pointed jab at the Compatible Time-Sharing System (CTSS) developed by MIT’s Computation Centre. Richard Stallman, who Levy describes as “the last true hacker”, was a graduate student and then AI Lab staff system hacker who devoted his time to improving ITS and writing applications to run on the computer. Stallman describes the Lab during the 13 years he worked there as “like the Garden of Eden”, a paradise where a community of hackers shared their work.

However, this period came to a bitter end in 1981, when most of the hackers working with Stallman left to join two companies spun off from the Lab. Four left to join Lisp Machines, Inc. (LMI), led by Stallman’s mentor, the ‘hacker’s hacker’, Richard Greenblatt, while 14 of the AI Lab staff left to join Symbolics, Inc., a company led by Russell Noftsker, who had been head of the AI Lab for eight years and had hired Stallman. (Noftsker had taken over from the original Director, Marvin Minsky, who worked on MIT’s DARPA-funded Project MAC, which later became the AI Lab.) For a while in 1979, Noftsker and Greenblatt discussed setting up a company together to sell Lisp machines, but they disagreed on how to fund the business initially. Greenblatt wanted to rely on reinvesting early customer orders and to retain full control of the company, while Noftsker was keen to use a larger amount of venture capital, accepting that some control of the company would be given up to the investors. Unable to agree, Greenblatt and Noftsker set up companies independently of each other, attracting most of the ITS hackers in the AI Lab to the extent that Stallman’s beloved community collapsed. With maintenance and development of ITS decimated, administrators of the AI Lab decided to switch to TOPS-20, DEC’s proprietary operating system, when a new PDP-10 was purchased in 1982. A year later, DEC ceased production of the PDP-10, which Stallman described as “the last nail in the coffin of ITS; 15 years of work went up in smoke.”

Lisp Machines went bankrupt in 1985, while Symbolics remained active until the end of the Cold War, when the military’s appetite for AI technologies slowed and the company subsequently declined. One more thing worth noting about these two AI Lab spin-offs is that within a year of Symbolics doing business, Stallman clashed with the company over the sharing of code. Having been deserted by his fellow hackers, Stallman made efforts to ensure that everyone continued to benefit from Symbolics’ enhancements to the Lisp Machine code, regularly merging Symbolics code with MIT’s version, which Greenblatt’s company used. Stallman was acting as a middle-man between the two code bases and the split community of hackers. Like other MIT customers, Symbolics licensed the Lisp Machine code from MIT, but it began to insist that its changes to the source code could not be redistributed beyond MIT, thereby cutting off Greenblatt’s Lisp Machines, Inc. and other MIT customers. Stallman’s efforts to keep the old AI Lab hacker community together through the sharing of distributed code came to an end.

In an essay, Stallman writes about how this was a defining moment in his life, from which he resolved to start the GNU Project and write his own ‘free’ operating system. In 1984, Stallman left his job at MIT to ensure that the university didn’t have any claim to the copyright of his work; however, he remained a guest of the AI Lab at the invitation of its Director, Patrick Winston, and still does so today. If you are at all familiar with the history of free software and the open source movement, you will know that Stallman went on to develop the GNU General Public License (GPL) in the late 1980s, which has become the most popular open source license in use today. Advocates of open education will know that the GPL was the inspiration for the development of Creative Commons licenses in 2001. Arguably, the impact of spinning off Lisp Machines and Symbolics from the AI Lab in 1981 is still being felt, and the 18 hackers who left to join those divergent startups can be considered paradigmatic for many hackers since, conscious of whether they are working on shared, open source software or proprietary software.

Everything I have described above can be easily pieced together in a few hours from existing sources. What is never discussed in the literature of hacking is the institutional, political and legal climate during the late 1970s and early 1980s, and indeed the decades prior to this period, that led to that moment for Stallman in 1984. In fact, most histories of hacking begin at MIT in 1961 with the Tech Model Railroad Club and, understandably, concentrate on the personalities and the development of an ethic within the hacker community. What is never mentioned is what led Greenblatt and Noftsker to leave that ‘Garden of Eden’ and establish firms. What instruments at that time encouraged and permitted these men to commercialise their work at MIT? Much of what I have written above can be unravelled back several decades to show how instrumental the development of higher education in the USA during the 20th century was to the creation of a hacker culture. The commercialisation of applied research; the development of cybernetic theory and its influence on systems thinking, information theory and Artificial Intelligence; the vast sums of government defense funding poured into institutions such as MIT since WWII; the creation of the first venture capital firm by Harvard and MIT; and most recently, the success of Y Combinator, the seed investment firm that initially sought to fund student hackers during their summer break, are all part of the historiography of hacking and the university.

Over the next few blog posts I will attempt to critically develop this narrative in more detail, starting with a discussion of the Bayh-Dole Act, introduced in 1980.

References

I’ve linked almost exclusively to Wikipedia articles in this blog post. It’s a convenient source that allows one to quickly sketch an idea. Much needs to be done to verify that information. There are a few books worth pointing out at this introductory stage of the narrative I’m trying to develop.

The classic journalistic account of the history of hacking is Steven Levy (1984) Hackers: Heroes of the Computer Revolution. I found this book fascinating, but it begins in 1958 with the Tech Model Railroad Club (chapter 1) and doesn’t offer any real discussion of the institutional and political cultures of the time which allowed a ‘hacker ethic’ (chapter 2) to emerge.

Eric Raymond’s writings are also worth reading. Raymond is a programmer and, as a member of the hacker tradition, has made several attempts to document it, including the classic account of the Linux open source project, The Cathedral and the Bazaar, and his work as Editor of the Jargon File, a glossary of hacker slang. Again, Raymond’s A Brief History of Hackerdom begins in the early 1960s with the Tech Model Railroad Club and does not reflect on the events leading up to that moment in history.

Another useful and influential book on hackers and hacking is Himanen (2001) The Hacker Ethic. Himanen is a sociologist and examines the meaning of the work of hackers and their values in light of the Protestant work ethic.

Tim Jordan’s 2008 book, Hacking, is a general introduction to hacker and cracker culture and provides an insightful and useful discussion around hacking and technological determinism. Like Himanen, Tim Jordan is also a sociologist.

Stallman’s book, Free Software, Free Society (2002), offers a useful direct account of his time at MIT in chapter 1.

Sam Williams’ biography of Stallman, Free as in Freedom (2002), later revised by Stallman in collaboration with Williams (2010), is essential reading. Chapter 7, ‘A Stark Moral Choice’, offers a good account of the break-up of Stallman’s hacker paradise in the early 1980s.

E. Gabriella Coleman’s book, Coding Freedom: The Ethics and Aesthetics of Hacking (2012), is an anthropological study of hackers, in particular the free software hackers of the Debian GNU/Linux operating system. Coleman’s book is especially useful as she identifies hackers and hacking as a liberal critique of liberalism. This might then be usefully extended to other movements that hackers have influenced, such as Creative Commons.

Reading The Cybernetic Hypothesis

Tiqqun was a French journal that published two issues, in 1999 and 2001. ((http://www.archive.org/details/Tiqqun1 ; http://www.archive.org/details/Tiqqun2)) The authors wrote as an editorial collective of seven people in the first issue and went uncredited in the second. More recently, one member of the original collective, Fulvia Carnevale, has said that:

I would like to say that Tiqqun is not an author, first of all. Tiqqun was a space for experimentation. It was an attempt at bridging the gap between theory and a number of practices and certain ways of “being together”. It was something that existed for a certain time and that then stopped because the people working at it weren’t happy with the relation between theory and practice and that certain people had decided that Tiqqun 3 would be a movie. ((See the interview with Agamben. http://www.dailymotion.com/video/x929gp A video of the Q&A which followed his talk has since been removed but an English transcript of both the talk and Q&A can be found here: http://anarchistwithoutcontent.wordpress.com/2010/04/18/tiqqun-apocrypha-repost/))

This space for experimentation amounted to 450 pages over three years, producing several substantial texts such as Bloom Theory, Introduction to Civil War, and The Cybernetic Hypothesis. ((Semiotext(e) (MIT Press) has recently published Introduction to Civil War and How is it to be done? in a single volume. A growing number of translations can be found on the web. The best source for these in English is: http://tiqqunista.jottit.com/))

Published in Tiqqun 2, The Cybernetic Hypothesis is forty-three pages long (in the original journal) and divided into eleven sections. Each section begins with one or two quotes which are then critiqued in order to further our understanding of the hypothesis and develop the author’s response. The author(s) write in the first person singular. They quote from a range of sources but do not offer precise references.

What follows are my notes on the text. A much more extended version of my notes is available here. Neither of these is a review of the text, simply a summary of my reading of each section.

Section one provides historical references for the objectives of cybernetics and argues that, as a political capitalist project, it has supplanted liberalism as both a paradigm and a technique of government that aims to dissolve human subjectivity into a rationalised and stable (i.e. inoffensive) totality through the automated capture of increasingly transparent flows of information and communication. The author(s) understand this subjugation of subjectivity as an offensive, anti-human act of war which must be counteracted.

Section two establishes cybernetics as the theoretical and technological outcome and continuation of a state of war, in which stability and control are its objectives. Developing with the emergence of post-war information and communication theory and corresponding innovation in computer software and hardware, intelligence is abstracted from the human population as generalised representations that are retained and communicated back to individuals in a commodified form. This feedback loop is understood as a ‘system’ and later as a naturalised ‘network’ which, drawing on the 19th century thermodynamic law of entropy, is at continual risk of degradation and must therefore be reinforced by the development of cybernetics itself.

Section three ends with a useful summary of its own:

The Internet simultaneously permits one to know consumer preferences and to condition them with advertising. On another level, all information regarding the behaviour of economic agents circulates in the form of headings managed by financial markets. Each actor in capitalist valorization is a real-time back-up of quasi-permanent feedback loops. On the real markets, as on the virtual markets, each transaction now gives rise to a circulation of information concerning the subjects and objects of the exchange that goes beyond simply fixing the price, which has become a secondary aspect. On the one hand, people have realized the importance of information as a factor in production distinct from labour and capital and playing a decisive role in “growth” in the form of knowledge, technical innovation, and distributed capacities. On the other, the sector specializing in the production of information has not ceased to increase in size. In light of its reciprocal reinforcement of these two tendencies, today’s capitalism should be called the information economy. Information has become wealth to be extracted and accumulated, transforming capitalism into a simple auxiliary of cybernetics. The relationship between capitalism and cybernetics has inverted over the course of the century: whereas after the 1929 crisis, PEOPLE built a system of information concerning economic activity in order to serve the needs of regulation – this was the objective of all planning – for the economy after the 1973 crisis, the social self-regulation process came to be based on the valorization of information.

Section four focuses on the role of information to both terrorise and control people. The sphere of circulation of commodities/information is increasingly seen as a source of profit and as this circulation accelerated with the development of mass transportation and communication, so the risk of disruption to the flow of commodities/information became more of a threat. In cybernetics, total transparency is seen as a means of control yet because the removal of risk is never absolutely possible, citizens are understood as both presenting a risk to the system and a means to regulate that risk through self-control. Control is therefore socialised and now defines the real-time information society. An awareness of risk brings with it an awareness of the vulnerability of a system that is dependent on an accelerated circulation/flow of information. Time/duration is a weakness and disruption to time is signalled as an opportunity to halt the flow and therefore the project of cybernetic capitalism.

Section five is a critique of socialism and the ecology movement, arguing that these two movements have been subsumed by cybernetic capitalism. The popular forms of protest over the last 30 years have only strengthened the cybernetic objectives of social interdependence, transparency and management. This marked the second period of cybernetics, which has sought to devolve the responsibility of regulation through surveillance, affirmed in the language of ‘citizenship’ and ‘democracy’.

Section six offers a critique of the Marxist response to cybernetic capitalism and finds it contaminated and complicit in its economism, humanism and totalising view of the world.

Section seven offers a brief critique of critical theory and finds it to be an ineffectual performance cloistered in the mythology of the Word and secretly fascinated by the cybernetic hypothesis. The section introduces insinuation as a mode of interference and tactic for overcoming the controlled circulation of communication. The author(s) indicate that the remaining sections of The Cybernetic Hypothesis are an attempt to undo the world that cybernetics constructs.

Section eight discusses panic, noise, invisibility and desire as categories of revolutionary force against the cybernetic framework. Panic is irrational behaviour that represents absolute risk to the system; noise is a distortion of behaviour in the system, neither desired behaviour nor the anticipated real behaviour. These invisible discrepancies are small variations (‘non-conforming acts’) that take place within the system and are amplified and intensified by desire. An individual acting alone has no influence, but their desire can produce an ecstatic politics which is made visible in a lifestyle which is, quite literally, attractive, with the potential to produce whole territories of revolt.

Section nine elaborates on invisibility as the preferred mode of diffuse guerilla action. A method of small selective strikes on the lines of communication followed by strategic withdrawal are preferred over large blows to institutions. Despite the distributed nature of the Internet, territorial interests have produced a conceivably vulnerable network reliant on a relatively small number of main trunks. Both individual spontaneity and the organisational abilities of institutions are valued but both should remain distant from cybernetic power and adopt a wandering course of unpredictability.

Section ten develops the author(s)’ tactics for countering cybernetic capitalism, through the application of slowness, disruptive rhythms, and the possibilities that arise from encounters with others. The cybernetic system is a politics of rhythm which thrives on speed for stability (as was discussed in section four) and on a range of predictability. The guerilla strategy is therefore one of dissonant tempos, improvisation and ‘wobbly’ rhythmic action.

Section eleven is a final attempt to define the key categories of struggle against the domination of cybernetic capitalism. These can be summarily listed as slowness, invisibility, fog, haze, interference, encounters, zones of opacity, noise, panic, rhythm/reverberation/amplification/momentum and finally, autonomy. Combined, these constitute an offensive practice against the requirement and expectation of cybernetics for transparency/clarity, predictability, and speed in terms of the information communicated and regulation of its feedbacks. The author(s) do not reject the cybernetic system outright but rather see the possibility for autonomous zones of opacity from which the invisible revolt can reverberate outwards and lead to a collapse of the cybernetic hypothesis and the rise of communism.

Originally published in French in Tiqqun II (2001). http://www.archive.org/details/Tiqqun2 Translated into English 2010 http://cybernet.jottit.com/