Last week, I wrote to Werner Bonefeld, seeking a couple of articles that were published in Common Sense: Journal of the Edinburgh Conference of Socialist Economists. The journal is pretty hard to come by these days: back issues are limited and relatively few of the articles exist on the web. It was published from 1987 to 1999, over 24 issues of about 100 pages each. As you can see from the image, the early issues (one to nine) look more like an A4, photocopied zine than an academic journal, but later issues take the more traditional form and were distributed by AK Press. A few articles were collected and published in 2003.
In my email to Werner, I mentioned that if I could get my hands on whole issues of the journal, I would digitise them for distribution on the web. As an editor of the journal, Werner was grateful and said that copyright was not a problem. I didn’t realise that Werner would send quite so many issues of the journal, but yesterday 15 of the 24 issues of Common Sense arrived in the post, along with a copy of his recent book, Subverting the Present, Imagining the Future.
My plan is to create high-quality, searchable digital versions of every issue of Common Sense over the next few months and offer them to Werner for his website, or I can create a website for them myself. I’ve done a lot of image digitisation over the years, but not text. If you have some useful advice for me, please leave a comment here. I’ll also seek advice from the librarians here, who have experience digitising books.
I have issues 10 to 24 (though not 11) and issue five. To begin my hunt for the missing copies, I’ve ordered issues 1, 2 and 3 from the British Library’s Interlibrary Loan service. An email this morning told me that the BL don’t have copies of the journal and are hunting them down from other libraries. We’ll see what they come up with. If you have issues 1, 2, 3, 4, 6, 7, 8, 9 or 11, I’d be grateful if you’d get in touch. It would be good to digitise the full set, and I’ll return any copies that I’m sent.
Why go to all this trouble?
Well, Common Sense was an important and influential journal “of and for social revolutionary theory and practice, ideas and politics.” In issue 21, reflecting on ten years of Common Sense, the editorial stated that:
Our project is class analysis and we aim to provide a platform for critical debates unfettered by conventional fragmentations of knowledge (either into ‘fields’ of knowledge or ‘types’ of knowledge, e.g. ‘academic’ and ‘non-academic’). This continuity in the concepts of class struggle and social change flies in the face of most interpretations of the last 10 years.
When the journal switched from A4 to A5 size, in May 1991 with issue ten, the editorial collective reflected on the first few years of the journal.
Common Sense was first produced in Edinburgh in 1987. It offered a direct challenge to the theory production machines of specialised academic journals, and tried to move the articulation of intellectual work beyond the collapsing discipline of the universities. It was organised according to minimalist production and editorial process which received contributions that could be photocopied and stapled together. It was reproduced in small numbers, distributed to friends, and sold at cost price in local bookshops and in a few outposts throughout the world. It maintained three interrelated commitments: to provide an open space wherein discussion could take place without regard to style or to the rigid classification of material into predefined subject areas; to articulate critical positions within the contemporary political climate; and to animate the hidden Scottish passion for general ideas. Within the context of the time, the formative impetus of Common Sense was a desire to juxtapose disparate work and to provide a continuously open space for a general critique of the societies in which we live.
The change in form that occurred with issue ten was a conscious decision to overcome the “restrictive” aspects of the minimalist attitude to production that had governed issues 1 to 9, which were filled with work by ranters, poets, philosophers, theorists, musicians, cartoonists, artists, students, teachers, writers and “whosoever could produce work that could be photocopied.” However, the change in form did not mark a conscious change in content for the journal, and the basic commitment remained: “to pose the question of what the common sense of our age is, to articulate critical positions in the present, and to offer a space for those who have produced work that they feel should be disseminated but that would never be sanctioned by the dubious forces of the intellectual police.” Later in the editorial of issue ten, they write:
The producers of Common Sense remain committed to the journal’s original brief – to offer a venue for open discussion and to juxtapose written work without regard to style and without deferring to the restrictions of university based journals, and they hope to be able to articulate something of the common sense of the new age before us. Common Sense does not have any political programme nor does it wish to define what is political in advance. Nevertheless, we are keen to examine what is this thing called “common sense”, and we hope that you who read the journal will also make contributions whenever you feel the inclination. We feel that there is a certain imperative to think through the changes before us and to articulate new strategies before the issues that arise are hijacked by the Universities to be theorised into obscurity, or by Party machines to be practised to death.
Why ‘Common Sense’?
The editorial in issue five, which you can read below, discusses why the journal was named, ‘Common Sense’.
Hopefully, if you’re new to Common Sense, like me, this has whetted your appetite for the journal and you’re looking forward to seeing it in digital form. In the meantime, you might want to read some of the work published elsewhere by members of the collective, such as Werner Bonefeld, John Holloway, Richard Gunn, Richard Noris, Alfred Mendes, Kosmas Psychopedis, Toni Negri, Nick Dyer-Witheford, Massimo De Angelis and Ana Dinerstein. If you were reading Common Sense back in the 1990s, perhaps contributed to it in some way, and would like to see Common Sense in digital form so that your students can read it on their expensive iPads and share it via underground file sharing networks, please have a dig around for those issues I’m missing and help me get them online.
The journal Common Sense exists as a relay station for the exchange and dissemination of ideas. It is run on a co-operative and non-profitmaking basis. As a means of maintaining flexibility as to numbers of copies per issue, and of holding costs down, articles are reproduced in their original typescript. Common Sense is non-elitist, since anyone (or any group) with fairly modest financial resources can set up a journal along the same lines. Everything here is informal, and minimalist.
Why, as a title, ‘Common Sense’? In its usual ordinary-language meaning, the term ‘common sense’ refers to that which appears obvious beyond question: “But it’s just common sense!”. According to a secondary conventional meaning, ‘common sense’ refers to a sense (a view, an understanding or outlook) which is ‘common’ inasmuch as it is widely agreed upon or shared. Our title draws upon the latter of these meanings, while at the same time qualifying it, and bears only an ironical relation to the first.
In classical thought, and more especially in Scottish eighteenth century philosophy, the term ‘common sense’ carried with it two connotations: (i) ‘common sense’ meant public or shared sense (the Latin ‘sensus communis‘ being translated as ‘publick sense’ by Francis Hutcheson in 1728). And (ii) ‘common sense’ signified that sense, or capacity, which allows us to totalise or synthesise the data supplied by the five senses (sight, touch and so on) of a more familiar kind. (The conventional term ‘sixth sense‘, stripped of its mystical and spiritualistic suggestions, originates from the idea of a ‘common sense’ understood in this latter way). It is in this twofold philosophical sense of ‘common sense’ that our title is intended.
I’ve got to say, it’s one of the most difficult texts I’ve ever read, despite going between two translations in the hope of a little clarity. However, while he seems to spin a syntax of his own at times, Heidegger’s overall message is pretty clear and simple: the poetic roots of technology have been obscured by a mechanisation that has compelled us to harness nature’s energy into an accumulated, homogeneous reserve that conceals the true nature of things. In this world, humans, too, have become resources, slaves to a process that constructs an appearance of truth rather than a revelation of the real. The solution is to question and confront technology through its forgotten roots in the arts.
Heidegger’s 32-page essay was originally a series of lectures he gave in 1949, entitled The Thing, Enframing, The Danger, and The Turning. He begins by setting out the reasons for his questioning:
Questioning builds a way. We would be advised, therefore, above all to pay heed to the way, and not to fix our attention on isolated sentences and topics. The way is one of thinking. All ways of thinking, more or less perceptibly, lead through language in a manner that is extraordinary. We shall be questioning concerning technology, and in so doing we should like to prepare a free relationship to it. The relationship will be free if it opens our human existence to the essence of technology. When we can respond to this essence, we shall be able to experience the technological within its own bounds.
Heidegger is concerned with questioning the essence of technology and in particular, modern technology, which he understands as something different to older, pre-industrialised forms of technology. The difference, to put it crudely, is that our technological relationship with nature was once that of steward but is now one of both master and slave. The purpose of questioning technology is therefore to break the chains of technology and be free, not in the absence of technology but through a better understanding of its essence and meaning. He suggests that there are two dominant ways of understanding technology: one is instrumental, to view it as a means to an end, while the other is to see it as human activity. He thinks they belong together.
For to posit ends and procure and utilize the means to them is a human activity. The manufacture and utilization of equipment, tools, and machines, the manufactured and used things themselves, and the needs and ends that they serve, all belong to what technology is. The whole complex of these contrivances is technology. Technology itself is a contrivance—in Latin, an instrumentum.
The current conception of technology, according to which it is a means and a human activity, can therefore be called the instrumental and anthropological definition of technology.
The instrumental view rests on a view of causality, which he breaks down into four Aristotelian causes: the material, the form, the end, and the effect. These four aspects of causality are in fact four aspects of ‘being responsible for bringing something into appearance’. They reveal that which was concealed. They are different but united by their revealing.
What has the essence of technology to do with revealing? The answer: everything. For every bringing-forth is grounded in revealing. Bringing-forth, indeed, gathers within itself the four modes of occasioning— causality—and rules them throughout. Within its domain belong end and means as well as instrumentality. Instrumentality is considered to be the fundamental characteristic of technology. If we inquire step by step into what technology, represented as means, actually is, then we shall arrive at revealing. The possibility of all productive manufacturing lies in revealing.
Technology is therefore no mere means. Technology is a way of revealing. If we give heed to this, then another whole realm for the essence of technology will open itself up to us. It is the realm of revealing, i.e., of truth.
Discussing techné, the root of ‘technology’, he observes that it encompasses both the activities and skills of the craftsman but also the arts of the mind and fine arts and concludes that techné “belongs to bringing-forth, to poiésis; it is something poetic.” Techné is also linked with the word epistémé and Heidegger states that both words “are names for knowing in the widest sense. They mean to be entirely at home in something, to understand and be expert in it.”
Such knowing provides an opening up. As an opening up it is a revealing. Aristotle, in a discussion of special importance (Nicomachean Ethics, Bk. VI, chaps. 3 and 4), distinguishes between epistémé and techné and indeed with respect to what and how they reveal. Techné is a mode of alethéuein. It reveals whatever does not bring itself forth and does not yet lie here before us, whatever can look and turn out now one way and now another. Whoever builds a house or a ship or forges a sacrificial chalice reveals what is to be brought forth, according to the terms of the four modes of occasioning. This revealing gathers together in advance the form and the matter of ship or house, with a view to the finished thing envisaged as completed, and from this gathering determines the manner of its construction. Thus what is decisive in techné does not at all lie in making and manipulating, nor in the using of means, but rather in the revealing mentioned before. It is as revealing, and not as manufacturing, that techné is a bringing-forth.
Thus the clue to what the word techné means and to how the Greeks defined it leads us into the same context that opened itself to us when we pursued the question of what instrumentality as such in truth might be.
Technology is a mode of revealing. Technology comes to presence in the realm where revealing and unconcealment take place, where alétheia, truth, happens.
Heidegger pre-empts the accusation that this view no longer holds true for modern, machine-powered technology. In defence, he argues that modern technology, in its mutual relationship of dependency with modern physics, is also ‘revealing’.
Modern physics, as experimental, is dependent upon technical apparatus and upon progress in the building of apparatus. The establishing of this mutual relationship between technology and physics is correct. But it remains a merely historiological establishing of facts and says nothing about that in which this mutual relationship is grounded. The decisive question still remains: Of what essence is modern technology that it thinks of putting exact science to use?
What is modern technology? It too is a revealing. Only when we allow our attention to rest on this fundamental characteristic does that which is new in modern technology show itself to us.
However, the revealing of modern technology differs from that of earlier, non-machine-powered technology in a fundamental way. It is not a revealing that unfolds in the sense of poiésis; rather, “the revealing that rules in modern technology is a challenging, which puts to nature the unreasonable demand that it supply energy which can be extracted and stored as such.” He then leaps into some illustrative examples:
But does this not hold true for the old windmill as well? No. Its sails do indeed turn in the wind; they are left entirely to the wind’s blowing. But the windmill does not unlock energy from the air currents in order to store it.
In contrast, a tract of land is challenged in the hauling out of coal and ore. The earth now reveals itself as a coal mining district, the soil as a mineral deposit. The field that the peasant formerly cultivated and set in order appears differently than it did when to set in order still meant to take care of and maintain. The work of the peasant does not challenge the soil of the field. In sowing grain it places seed in the keeping of the forces of growth and watches over its increase. But meanwhile even the cultivation of the field has come under the grip of another kind of setting-in-order, which sets upon nature. It sets upon it in the sense of challenging it. Agriculture is now the mechanized food industry. Air is now set upon to yield nitrogen, the earth to yield ore, ore to yield uranium, for example; uranium is set up to yield atomic energy, which can be unleashed either for destructive or for peaceful purposes.
This setting-upon that challenges the energies of nature is an expediting, and in two ways. It expedites in that it unlocks and exposes. Yet that expediting is always itself directed from the beginning toward furthering something else, i.e., toward driving on to the maximum yield at the minimum expense. The coal that has been hauled out in some mining district has not been produced in order that it may simply be at hand somewhere or other. It is being stored; that is, it is on call, ready to deliver the sun’s warmth that is stored in it. The sun’s warmth is challenged forth for heat, which in turn is ordered to deliver steam whose pressure turns the wheels that keep a factory running.
All technology reveals, but modern technology reveals not in the unfolding poetic sense but as a challenge; it sets upon nature and expedites its energy by unlocking it.
The revealing that rules throughout modern technology has the character of a setting-upon, in the sense of a challenging–forth. Such challenging happens in that the energy concealed in nature is unlocked, what is unlocked is transformed, what is transformed is stored up, what is stored up is in turn distributed, and what is distributed is switched about ever anew. Unlocking, transforming, storing, distributing, and switching about are ways of revealing. But the revealing never simply comes to an end. Neither does it run off into the indeterminate. The revealing reveals to itself its own manifoldly interlocking paths, through regulating their course. This regulating itself is, for its part, everywhere secured. Regulating and securing even become the chief characteristics of the revealing that challenges.
Once unlocked, this energy (raw or in the form of machine-powered technology) is held captive as a standing reserve. The airliner standing on the runway is a stationary object ordered to be ready for take-off. However, this apparent mastery over nature’s energy is no such thing because we are challenged, ordered, to act this way. We, in fact, like the airliner on the runway, are situated in the ‘standing reserve’ as human resources.
The forester who measures the felled timber in the woods and who to all appearances walks the forest path in the same way his grandfather did is today ordered by the industry that produces commercial woods, whether he knows it or not. He is made subordinate to the orderability of cellulose, which for its part is challenged forth by the need for paper, which is then delivered to newspapers and illustrated magazines. The latter, in their turn, set public opinion to swallowing what is printed, so that a set configuration of opinion becomes available on demand. Yet precisely because man is challenged more originally than are the energies of nature, i.e., into the process of ordering, he never is transformed into mere standing-reserve. Since man drives technology forward, he takes part in ordering as a way of revealing. ((I wonder what Marx would have to say about this. It sounds to me like Heidegger is referring to the imperative of capitalist laws of motion. cf. Ellen Meiksins Wood))
In this way, we are challenged by modern technology to approach nature “as an object of research” to reveal or “order the real as standing reserve”. Heidegger refers to this as enframing. Enframing is the essence of modern technology.
Enframing means the gathering together of the setting-upon that sets upon man, i.e., challenges him forth, to reveal the actual, in the mode of ordering, as standing-reserve. Enframing means the way of revealing that holds sway in the essence of modern technology and that is itself nothing technological. On the other hand, all those things that are so familiar to us and are standard parts of assembly, such as rods, pistons, and chassis, belong to the technological. The assembly itself, however, together with the aforementioned stockparts, fall within the sphere of technological activity. Such activity always merely responds to the challenge of enframing, but it never comprises enframing itself or brings it about.
There then follow a couple of pages which reflect on the relationship between physics and modern technology. As a 17th c. precursor to 18th c. modern technology, physics is a theory which sets up nature in a way that orders it in a coherent, self-serving manner. It is not experimental merely because “it applies apparatus to the questioning of nature”; the reverse is true. The physical theory of nature is the herald of modern technology, and it conceals the essence of modern technology. Technology then, in its essence as enframing, precedes physics.
Modern physics… is challenged forth by the rule of enframing, which demands that nature be orderable as standing-reserve. Hence physics, in its retreat from the kind of representation that turns only to objects, which has been the sole standard until recently, will never be able to renounce this one thing: that nature report itself in some way or other that is identifiable through calculation and that it remain orderable as a system of information. This system is then determined by a causality that has changed once again. Causality now displays neither the character of the occasioning that brings forth nor the nature of the causa efficiens, let alone that of the causa formalis. It seems as though causality is shrinking into a reporting—a reporting challenged forth—of standing-reserves that must be guaranteed either simultaneously or in sequence… Because the essence of modern technology lies in enframing, modern technology must employ exact physical science. Through its so doing the deceptive appearance arises that modern technology is applied physical science. This illusion can maintain itself precisely insofar as neither the essential provenance of modern science nor indeed the essence of modern technology is adequately sought in our questioning.
Heidegger’s use of language (or rather the way it is expressed in English translation) can be difficult at times. In the remaining few pages he discusses what enframing actually is, building upon the idea that, as the essence of technology, it is that which reveals the real through ordering as standing reserve. As discussed above, we humans are challenged forth (compelled) by enframing to reveal the real in a seemingly deterministic way (Heidegger refers to this as destining) that holds complete sway over us. However, technology is not our fate; we are not necessarily compelled along an unaltered and inevitable course, because “enframing belongs within the destining of revealing” and destining is “an open space” where man can “listen and hear” that which is revealed. Freedom is in “intimate kinship” with the revealed as “all revealing comes out of the open, goes into the open, and brings into the open… Freedom is the realm of the destining that at any given time starts a revealing upon its way.” Freedom, then, is to be found in the essence of technology, but we are continually caused to believe that the brink of possibility is that which is revealed in the ordering processes of modern technology to create the standing reserve, deriving all our standards from this basis. Freedom is continually blocked by this process of the destining of revealing, which obscures the real. This is a danger.
It is a danger because when the real is concealed it may be misinterpreted. When something is unconcealed it no longer concerns us as an object but, rather, as standing reserve “and man in the midst of objectlessness is nothing but the orderer of the standing reserve”. When the object is lost to the standing reserve, we ourselves become standing reserve and see everything as our construct, seeing not objects everywhere but the illusion and delusion of encountering ourselves everywhere.
In truth, however, precisely nowhere does man today any longer encounter himself, i.e., his essence. Man stands so decisively in subservience to the challenging-forth of enframing that he does not grasp enframing as a claim, that he fails to see himself as the one spoken to, and hence also fails in every way to hear in what respect he ek-sists, in terms of his essence, in a realm where he is addressed, so that he can never encounter only himself.
But enframing does not simply endanger man in his relationship to himself and to everything that is. As a destining, it banishes man into the kind of revealing that is an ordering. Where this ordering holds sway, it drives out every other possibility of revealing. Above all, enframing conceals that revealing which, in the sense of poiésis, lets what presences come forth into appearance.
Enframing blocks the truth and destining compels us to create order out of nature which we believe is the truth. This is the danger, not of technology, which itself cannot be dangerous, but rather of the destining of revealing itself. Enframing, the essence of technology then, is the danger.
The threat to man does not come in the first instance from the potentially lethal machines and apparatus of technology. The actual threat has already afflicted man in his essence. The rule of enframing threatens man with the possibility that it could be denied to him to enter into a more original revealing and hence to experience the call of a more primal truth.
Drawing on Holderlin, Heidegger believes that technology’s essence contains both the danger (enframing) and its saving power. How is this so? Enframing is not the essence of technology in the sense of a genus, “enframing is a way of revealing having the character of destining, namely, the way that challenges forth.” Recall that the revealing that “brings forth” (poiésis) is also a way with the character of destining. By contrast, enframing blocks poiésis.
Thus enframing, as a destining of revealing, is indeed the essence of technology, but never in the sense of genus and essentia. If we pay heed to this, something astounding strikes us: it is technology itself that makes the demand on us to think in another way what is usually understood by “essence.”
As we have seen, the essence of modern technology for Heidegger is enframing and, as its essence, enframing is that which endures. Enframing is “a destining that gathers together into the revealing that challenges forth.” But Heidegger also states that “only what is granted endures” and “challenging is anything but a granting.” So how can the challenging of modern technology be resolved into that which is granted and endures? What is the saving power “that lets man see and enter into the highest dignity of his essence”? The answer is to recall that enframing need not only challenge forth but can also bring forth the revealing of nature. “The essential unfolding of technology harbors in itself what we least suspect, the possible rise of the saving power.”
Heidegger argues that “everything depends” on our ability and willingness to cast a critical eye over “the essential unfolding” of technology; that instead of “gaping” at technology, we try to catch sight of what unfolds in technology; that instead of falling for the “irresistibility of ordering”, we opt for the “restraint of the saving power”, always aware of the danger of technology, which threatens us with the possibility that its revealing, saving power might be “consumed in ordering and that everything will present itself only in the unconcealedness of standing reserve.”
So long as we represent technology as an instrument, we remain transfixed in the will to master it. We press on past the essence of technology… The essence of technology is ambiguous. Such ambiguity points to the mystery of all revealing, i.e., of truth.
Now at the end of his essay, we can see there are two possible directions one might take with technology:
On the one hand, enframing challenges forth into the frenziedness of ordering that blocks every view into the propriative event of revealing and so radically endangers the relation to the essence of truth.
On the other hand, enframing propriates for its part in the granting that lets man endure—as yet inexperienced, but perhaps more experienced in the future—that he may be the one who is needed and used for the safekeeping of the essence of truth. Thus the rising of the saving power appears.
Heidegger concludes that technology once shared the root techné with a broader practice of poiésis. Technology (techné) brought forth and revealed that which was true and beautiful through the poetics of the fine arts. It is in the realm of the arts, therefore, that we can practice the questioning of technology in the hope of revealing the truth, which modern technology habitually conceals through the order it imposes on the world.
Because the essence of technology is nothing technological, essential reflection upon technology and decisive confrontation with it must happen in a realm that is, on the one hand, akin to the essence of technology and, on the other, fundamentally different from it.
Such a realm is art. But certainly only if reflection upon art, for its part, does not shut its eyes to the constellation of truth, concerning which we are questioning.
During the middle part of my twenties, I studied Buddhism at SOAS, University of London, and then at the University of Michigan. A very common phrase found in English translations of Buddhist texts is ‘expedient means’ or ‘skilful means’, from the Sanskrit उपाय upāya. Although I’ve barely read any Buddhist texts for over ten years now, this phrase has recently been coming to mind quite often, and it was only when I read the Wikipedia article for upāya that I realised the connection to my current work.
Upaya (Sanskrit: उपाय upāya, “Expedient Means” or “pedagogy”) is a term in Mahayana Buddhism which comes from the word upa√i and refers to something which goes or brings you up to something (i.e., a goal). It is essentially the Buddhist term for dialectics. The term is often used with kaushalya (कौशल्य, “cleverness”); upaya-kaushalya means roughly “skill in means”. Upaya-kaushalya is a concept which emphasizes that practitioners may use their own specific methods or techniques in order to cease suffering and introduce others to the dharma. The implication is that even if a technique, view, etc., is not ultimately “true” in the highest sense, it may still be an expedient practice to perform or view to hold; i.e., it may bring the practitioner closer to true realization anyway.
I’d never thought of upāya as ‘pedagogy‘, but this translation does make sense to me. It is a technique, a practice, a method of instruction, that may further one’s understanding of something, or be taught to further the understanding of others. It’s also interesting that the Wikipedia article refers to upāya as the term for Buddhist dialectics. I don’t think this is strictly true, as there are examples of upāya in the literature which use parable or kōan, for example, but it is true that in the early scholarly (monastic) texts, negative dialectics is a preferred method of pursuing truth. Nāgārjuna’s Mādhyamaka school of Buddhist thought is a good example of this. The Gelug school of Tibetan Buddhism still maintains a tradition of negative dialectical debate in its monasteries.
Although, as I learned to both my interest and frustration, the interpretation of ancient religious and philosophical ideas is constantly open to debate, it is conceivable that upāya might be understood and expressed as a pedagogical praxis of negative dialectics, or more simply, a critical pedagogy. It’s also occurred to me that there might be some value in thinking about ‘expedient means’ in light of Holloway’s work on negativity, his No! and Marx’s ‘negation of the negation‘.
The negation of the negation does not bring us back to a reconciliation, to a positive world, but takes us deeper into the world of negation, moves us onto a different theoretical plane.
There’s clearly a lot to clarify and flesh out but, broadly speaking, I think the idea of ‘expedient means’ is a useful way of organising and developing my thinking. In pedagogical terms, what is technology’s dialectic? How might it be used as an expedient or skilful means, which leads to a better understanding of ourselves and our predicament?
To reflect this new approach to my work, which has been emerging since the start of this year, I’ve changed the title of this blog. The previous ../learninglab/joss title no longer made much sense anyway, as the blog is no longer hosted on the Learning Lab.
I would like to say that Tiqqun is not an author, first of all. Tiqqun was a space for experimentation. It was an attempt at bridging the gap between theory and a number of practices and certain ways of “being together”. It was something that existed for a certain time and that then stopped because the people working at it weren’t happy with the relation between theory and practice, and because certain people had decided that Tiqqun 3 would be a movie. ((See the interview with Agamben. http://www.dailymotion.com/video/x929gp A video of the Q&A which followed his talk has since been removed but an English transcript of both the talk and Q&A can be found here: http://anarchistwithoutcontent.wordpress.com/2010/04/18/tiqqun-apocrypha-repost/))
This space for experimentation amounted to 450 pages over three years, producing several substantial texts such as Bloom Theory, Introduction to Civil War, and The Cybernetic Hypothesis. ((Semiotext(e) (MIT Press) has recently published Introduction to Civil War and How is it to be done? in a single volume. A growing number of translations can be found on the web. The best source for these in English is: http://tiqqunista.jottit.com/))
Published in Tiqqun 2, The Cybernetic Hypothesis is forty-three pages long (in the original journal) and divided into eleven sections. Each section begins with one or two quotes which are then critiqued in order to further our understanding of the hypothesis and develop the author’s response. The author(s) write in the first person singular. They quote from a range of sources but do not offer precise references.
What follows are my notes on the text. A much more extended version of my notes is available here. Neither is a review of the text; both are simply a summary of my reading of each section.
Section one provides historical references for the objectives of cybernetics and argues that as a political capitalist project it has supplanted liberalism as both a paradigm and technique of government that aims to dissolve human subjectivity into a rationalised and stable (i.e. inoffensive) totality through the automated capture of increasingly transparent flows of information and communication. The authors understand this subjugation of subjectivity as an offensive, anti-human act of war which must be counteracted.
Section two establishes cybernetics as the theoretical and technological outcome and continuation of a state of war, in which stability and control are its objectives. Developing with the emergence of post-war information and communication theory and corresponding innovation in computer software and hardware, intelligence is abstracted from the human population as generalised representations that are retained and communicated back to individuals in a commodified form. This feedback loop is understood as a ‘system’ and later as a naturalised ‘network’ which, drawing on the 19th century thermodynamic law of entropy, is at continual risk of degradation and must therefore be reinforced by the development of cybernetics itself.
Section three ends with a useful summary of its own:
The Internet simultaneously permits one to know consumer preferences and to condition them with advertising. On another level, all information regarding the behaviour of economic agents circulates in the form of headings managed by financial markets. Each actor in capitalist valorization is a real-time back-up of quasi-permanent feedback loops. On the real markets, as on the virtual markets, each transaction now gives rise to a circulation of information concerning the subjects and objects of the exchange that goes beyond simply fixing the price, which has become a secondary aspect. On the one hand, people have realized the importance of information as a factor in production distinct from labour and capital and playing a decisive role in “growth” in the form of knowledge, technical innovation, and distributed capacities. On the other, the sector specializing in the production of information has not ceased to increase in size. In light of its reciprocal reinforcement of these two tendencies, today’s capitalism should be called the information economy. Information has become wealth to be extracted and accumulated, transforming capitalism into a simple auxiliary of cybernetics. The relationship between capitalism and cybernetics has inverted over the course of the century: whereas after the 1929 crisis, PEOPLE built a system of information concerning economic activity in order to serve the needs of regulation – this was the objective of all planning – for the economy after the 1973 crisis, the social self-regulation process came to be based on the valorization of information.
Section four focuses on the role of information to both terrorise and control people. The sphere of circulation of commodities/information is increasingly seen as a source of profit and as this circulation accelerated with the development of mass transportation and communication, so the risk of disruption to the flow of commodities/information became more of a threat. In cybernetics, total transparency is seen as a means of control yet because the removal of risk is never absolutely possible, citizens are understood as both presenting a risk to the system and a means to regulate that risk through self-control. Control is therefore socialised and now defines the real-time information society. An awareness of risk brings with it an awareness of the vulnerability of a system that is dependent on an accelerated circulation/flow of information. Time/duration is a weakness and disruption to time is signalled as an opportunity to halt the flow and therefore the project of cybernetic capitalism.
Section five is a critique of socialism and the ecology movement, proposing that these two movements have been subsumed by cybernetic capitalism. The popular forms of protest over the last 30 years have only strengthened the cybernetic objectives of social interdependence, transparency and management. This marked the second period of cybernetics, which has sought to devolve the responsibility of regulation, via surveillance, through the affirmation of ‘citizenship’ and ‘democracy’.
Section six offers a critique of the Marxist response to cybernetic capitalism and finds it contaminated and complicit in its economism, humanism and totalising view of the world.
Section seven offers a brief critique of critical theory and finds it to be an ineffectual performance cloistered in the mythology of the Word and secretly fascinated by the cybernetic hypothesis. The section introduces insinuation as a mode of interference and tactic for overcoming the controlled circulation of communication. The author(s) indicate that the remaining sections of The Cybernetic Hypothesis are an attempt to undo the world that cybernetics constructs.
Section eight discusses panic, noise, invisibility and desire as categories of revolutionary force against the cybernetic framework. Panic is irrational behaviour that represents absolute risk to the system; noise is a distortion of behaviour in the system, neither desired behaviour nor the anticipated real behaviour. These invisible discrepancies are small variations (‘non-conforming acts’) that take place within the system and are amplified and intensified by desire. An individual acting alone has no influence, but their desire can produce an ecstatic politics which is made visible in a lifestyle which is, quite literally, attractive, with the potential to produce whole territories of revolt.
Section nine elaborates on invisibility as the preferred mode of diffuse guerilla action. A method of small selective strikes on the lines of communication followed by strategic withdrawal are preferred over large blows to institutions. Despite the distributed nature of the Internet, territorial interests have produced a conceivably vulnerable network reliant on a relatively small number of main trunks. Both individual spontaneity and the organisational abilities of institutions are valued but both should remain distant from cybernetic power and adopt a wandering course of unpredictability.
Section ten develops the author(s)’ tactics for countering cybernetic capitalism, through the application of slowness, disruptive rhythms, and the possibilities that arise from encounters with others. The cybernetic system is a politics of rhythm which thrives on speed for stability (as was discussed in section four) and a range of predictability. The guerilla strategy is therefore one of dissonant tempos, improvisation and ‘wobbly’ rhythmic action.
Section eleven is a final attempt to define the key categories of struggle against the domination of cybernetic capitalism. These can be summarily listed as slowness, invisibility, fog, haze, interference, encounters, zones of opacity, noise, panic, rhythm/reverberation/amplification/momentum and finally, autonomy. Combined, these constitute an offensive practice against the requirement and expectation of cybernetics for transparency/clarity, predictability, and speed in terms of the information communicated and regulation of its feedbacks. The author(s) do not reject the cybernetic system outright but rather see the possibility for autonomous zones of opacity from which the invisible revolt can reverberate outwards and lead to a collapse of the cybernetic hypothesis and the rise of communism.
Originally published in French in Tiqqun II (2001). http://www.archive.org/details/Tiqqun2 Translated into English 2010 http://cybernet.jottit.com/
Stuart Staniford is one of the sharpest bloggers I know of and I just wanted to point to something he posted a few days ago which examines the relationship between your level of education, your ability to use technology, and your income. He’s commenting on US data and projections, but it’s still of interest to the rest of us. Click the image to go to his original blog post.
Clearly, using technology always increases your value, but the more education and skills you have, the bigger a multiplier technology gives you. Thus information technology is a fundamental driver of income inequality.
Throughout the last few months of my research into the implications of an energy crisis for Higher Education, one of my main weaknesses was knowing where to start when considering the impact that a Peak Oil energy crisis would have on our economy and therefore on the economic input and output of the HE sector. When considering an energy crisis scenario in the context of Higher Education, it seems to me that we can broadly divide the impacts into 1) Economic; and 2) Infrastructural. By this, I mean that we should be asking ourselves questions that relate to how we operate in an economy with significantly declining GDP and how we operate under circumstances where our energy infrastructure itself declines (both transport and coal- and gas-dependent electricity are, in a sense, underwritten by oil production). ((There is also the problem in the UK that many of our power stations need to be decommissioned around 2016. See this and this.)) Simply put, what happens to universities when there is a lot less money in the economy and energy in the form of electricity and petrol is rationed in one way or another? This was my original question and, I think, still remains valid.
I think that Educational Technologists should be thinking hard about the second part of the question, which implies that the provision of educational technology will be disrupted for decades. What is HE’s educational provision under a scenario of disrupted ICT?
The first part of the question should be of wider interest to people working in the HE sector (in fact, people in all parts of society, but this is where we work so let’s concentrate on HE). There has been some useful research done on the possible economic impact of Peak Oil. It cannot be conclusive, but it does provide us with the basis for our scenario planning. I would recommend these two recent papers which examine the likely short-term (i.e. 20 yrs) economic and social consequences of Peak Oil.
In summary, Hirsch’s paper shows that we can work on the assumption that global GDP will decline at about the same rate as global oil production, which is anticipated to be around 2-5%/yr. The minority of oil exporting countries will fare better than the greater number of oil importing countries. Friedrichs’ paper, based on an analysis of historical examples, suggests that this will result in North America resorting to greater military coercion until a crippled economy forces the administration into ‘coercive diplomacy’. Western Europe, reluctant to engage in ‘predatory militarism’, “could hardly avoid a transition to a more community-based lifestyle. Despite the present affluence of Western European societies (or precisely because of it), this would be extremely painful and last for several generations.”
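As a rough illustration of what decline rates of this order compound to, here is a small sketch (my own, not from either paper) assuming, per Hirsch, that GDP simply tracks oil production at a fixed annual rate of decline:

```python
# Illustrative only: compound a fixed annual decline rate over 20 years,
# on the assumption that GDP tracks oil production.
def remaining_fraction(annual_decline_pct, years):
    """Fraction of today's output remaining after compounding the decline."""
    return (1 - annual_decline_pct / 100) ** years

for rate in (2, 5):
    print(f"{rate}%/yr over 20 years leaves "
          f"{remaining_fraction(rate, 20):.0%} of today's output")
```

At 2%/yr, roughly two-thirds of today’s output remains after 20 years; at 5%/yr, only around a third, which is why the 20-year horizon of these papers matters.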
These papers, and their references, provide a good starting point for modelling the economic and social impacts on all aspects of society, including the UK Higher Education sector: Less money, more re-localisation.
On a related note, here are a few graphs which nicely illustrate the correlation between oil, money and debt. What they suggest is that oil production closely correlates with GDP and that since oil production plateaued in 2005, debt has been the driver of GDP where oil has been lacking.
The debt information is pretty suggestive of what is going on, and that is, the reason the world has been able to keep increasing GDP since 2005 is because it has been borrowing from the future to fund the addiction to economic growth. But this situation cannot continue without serious problems in terms of repayment. And we have imminent peak oil, with the consensus dawning that soon after 2011 oil supply is highly likely to start declining with decline rates anywhere between 2% and 8% per annum. ((I am Perplexed: Comments on the World Financial Situation and Peak Oil))
Richard Hall, who I collaborate with, has just posted our submission to the Open Education 2010 conference. It has been accepted and will be the last of a few ‘resilient education’ presentations and workshops that we are running over the summer. Hopefully, by the end of this process, we’ll have a decent idea about what people working in Higher Education think of the scenarios we are proposing and the challenges and opportunities that arise.
Here are the slides from a workshop we did at De Montfort University yesterday. We’ll be running a similar workshop at the HEA Conference next week and at the ALT Conference in September. The OpenEd10 abstract follows these slides.
HE faces complex disruptions. Can open education and social media enable individuals-in-communities to develop resilience and overcome dislocation?
Higher Education faces complex disruptions, from the growing threat of peak oil (The Oil Drum, 2010) and the impact that will have on our ability to consume/produce (Natural Environment Research Council, 2009), and from our need to own the carbon and energy we emit/use, in order to combat climate change. These problems are being amplified by energy availability and costs (The Guardian, 2009), public sector debt and the effects of a zero growth economy (new economics foundation, 2010).
One focus for response is the use of technology and its impact upon approaches to open education, in developing resilience. The Horizon Report 2010 (New Media Consortium, 2010) highlights the importance of openness but argues that learning and teaching practices need to be seen in light of civic engagement and complexity. Facer and Sandford (2010) ask critical questions of inevitable and universal futures, focused upon always-on technology, and participative, inclusive, democratic change. There is an ethical imperative to discuss the impacts of our use of technology on our wider communities and environment, and to define possible solutions.
Educational technology might be used to address some of these issues through the development of shared, humane values that are amplified by specific qualities of open education, including: relationships and power; anxiety and hope; and social enterprise and community-up provision. These areas are impacted by resilience, which is socially- and environmentally-situated, and denotes the ability of individuals and communities to learn and adapt, to mitigate risks, prepare for solutions to problems, respond to risks that are realised, and to recover from dislocations (Hopkins, 2009). This focuses upon defining problems and framing solutions contextually, around our abilities to develop adaptability to work virally and in ways that are open source and self-reliant. This means working at appropriate scale to take civil action, through diversity, modularity and feedback within communities.
The key for any debate on resilience linked to open education is in defining a curriculum that requires institutions to become less managerial and more open to the formation of devolved social enterprises. This demands the encouragement of what Gramsci (1971) called organic intellectuals, who can emerge from within communities to lead action. Learners and tutors may emerge as such organic intellectuals, working openly with communities in light of disruption. An important element here is what Davis (2007) terms “democratic ‘co-governance’” within civil action, but which might usefully be applied to education, in the form of co-governance of educational outputs. One key issue is how open education is (re)claimed by users and communities within specific contexts and curricula, in-line with personal integration and enquiry, within an uncertain world (Futurelab, 2009).
The following questions emerge, catalysed by open education.
1. What sorts of literacies of resilience do people as social agents need?
2. What sorts of knowledge/understanding do these learners need to be effective agents in society?
3. Are our extant modes of designing and delivering curricula meaningful or relevant?
This paper will address these questions by examining whether open education can enable individuals-in-communities to recover from dislocations.
Davis, J. 2007. The Limits of Partnership: An Exit-Action Strategy for Local Democratic Inclusion. Political Studies, 55(4): 779-900.
Facer, K. and Sandford. R. 2010. The next 25 years?: future scenarios and future directions for education and technology. Journal of Computer Assisted Learning 26, no.1: 74–93.
Futurelab. 2009. Enquiring Minds: Year 4 report: Innovative approaches to curriculum reform. Futurelab report.
Owen, Inderwildi and King’s recent paper, The status of conventional world oil reserves—Hype or cause for concern?, supports previous studies by others which show that conventional oil production peaked in 2005 and that the production capacity of all liquids (excluding gas) will peak around 2010. Conventional oil supply is declining by over 4%/annum, with the shortfall and anticipated additional demand being met by non-conventional oil (deep sea, tar sands) and other liquid sources such as gas and bio-fuels. In just ten years, 50% of the global demand for liquid fuel will have to be met by sources that are not in production today. Not surprisingly then, speculation over the price of oil based on fundamental supply and demand factors, as well as global events such as the invasion of Iraq, Hurricane Katrina and Israel threatening to attack Iran, has resulted in increased volatility of prices, reaching a high of $147 a barrel in July 2008. In 2000, a barrel of oil cost around $20. A decade later, the price of oil is around $80 a barrel and the trend remains upwards.
Different economic theories applied to the supply and demand of oil offer opposite outcomes. One suggests that the law of diminishing returns would create an incentive to invest further in unconventional sources such as tar sands and deep sea resources. On the other hand, some economists argue that due to the inextricable link between oil and economic activity, high oil prices can’t be sustained and that a price of $100 a barrel would induce global recession, driving down oil prices and paradoxically reducing investment in alternative fuels. While rising oil prices can damage economic growth, lowering oil prices does not have the same, proportionate effect on stimulating growth. It has been estimated that oil price-GDP elasticity is -0.055 (+/- 0.005) meaning that a 10% rise in oil prices leads to a 0.55% loss in global GDP.
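The quoted elasticity can be applied as simple back-of-the-envelope arithmetic. This sketch (mine, not from the paper) just multiplies the elasticity by the price change, a linear approximation that is only reasonable for small changes:

```python
# Back-of-the-envelope use of the oil price-GDP elasticity quoted above
# (-0.055 +/- 0.005). A linear approximation, sensible only for small changes.
def gdp_change_pct(oil_price_rise_pct, elasticity=-0.055):
    """Approximate % change in global GDP for a given % rise in oil price."""
    return elasticity * oil_price_rise_pct

print(gdp_change_pct(10))  # a 10% oil price rise -> about -0.55% of global GDP
```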
Owen, Inderwildi and King’s paper concludes that published world oil reserve estimates are inaccurate and should be revised downwards by a third. Over-reporting since the 1980s due to the ‘fight for quotas’ whereby OPEC agreed to set export quotas in proportion to reserve volumes, and the inclusion of tar-sands into reserve estimates since 2004 have distorted reality. They add that “supply and demand is likely to diverge between 2010 and 2015, unless demand falls in parallel with supply constrained induced recession” and that “the capacity to meet liquid fuel demand is contingent upon the rapid and immediate diversification of the liquid fuel mix, the transition to alternative energy carriers where appropriate, and demand side measures such as behavioural change and adaptation.”
In their report, The Oil Crunch. A wake-up call for the UK economy, the Industry Taskforce on Peak Oil and Energy Security (ITPOES) likens the effect of an imminent ‘oil crunch’, due mid-decade, to the current ‘credit crunch’. The report identifies a slowdown in the production of oil prior to 2013, when it then begins to drop; without replacement infrastructure already in place, little can be done to address this fully within the next five years. The authors note that the discovery rate of oil is inadequate to meet projected demand and they too, have concerns over published OPEC quotas.
It is worth noting the difference between conventional and unconventional oil, such as deep sea oil and tar sands. The ITPOES report notes how unconventional sources of oil are much more energy intensive to exploit and therefore more expensive to supply oil from. Whereas OPEC might extract a barrel of oil for around $20, deep sea oil might cost around $70 a barrel and a barrel of oil from tar sands costs around $90 to extract. Therefore, as conventional sources decline at more than 4% per year, the replacement non-conventional oil pushes the price upwards if demand is to be met.
Global oil demand is forecast to rise, although much of this increased demand will come from developing countries. An extrapolation of historical demand would suggest that 120Mb/day will be required by 2050, compared to our current use of around 84.5Mb/day. However, a historical extrapolation of largely OECD demand could be deceptive, given that five out of a global population of six billion people live in non-OECD countries like China and India, where demand for oil is growing strongest. OECD demand for oil has flattened out in recent years and may stay that way, yet globalisation has created new markets which are emerging in an environment of relatively high oil prices and may be more immune to price rises than OECD economies. Where OECD countries may tip into recession when oil hits $100/barrel, China’s economy and therefore demand for oil, may continue to grow.
However, it is one thing to extrapolate continued economic growth and a corresponding demand for oil and another to supply that demand. Like Owen, Inderwildi and King, the ITPOES report argues that global oil capacity will peak later this year at around 91-92Mb/d and continue at that level until 2015, at which time depletion will overtake capacity growth. Just as the production of conventional oil has plateaued since 2005, the production of all liquids will plateau from 2010/11 for about five years before entering terminal decline. This would have happened two years earlier were it not for the global recession temporarily reducing demand.
Similarly, the ITPOES report highlights the link between high oil prices and recessions, drawing on recent work that shows how every US recession since 1960 has been preceded by rapid oil price rises and that when the price of oil exceeds 4% of US GNP, a recession occurs shortly afterwards. Although this correlation is not necessarily causation, it is “highly suggestive” and the report notes that 4% at current GNP is around $80/barrel, roughly the price of oil at the time of writing. OPEC have publicly stated that they prefer a price around $75/barrel.
The relationship between oil prices and the economy is referred to repeatedly in the ITPOES report. OECD countries are heading some way towards a partial decoupling of economic growth from the consumption of oil, although not to an extent that assures overall energy security. By 2013, OECD and non-OECD countries are likely to each take half of the total global supply of oil. The recession since 2008 has had some impact on global oil consumption, but this is recovering quickly and demand is on the rise. By 2014, prices are expected to be volatile, between $120-$150/barrel resulting in “recessionary forces” which produce a repeat of the 2008 recession. Consequently, oil prices are then expected to drop to somewhere between $90-$120/barrel, picking up again as the global economy recovers. And so on…
For the UK, the ITPOES report highlights the vulnerability of the transport sector and the knock-on effects that higher petrol prices are likely to have on our ‘just-in-time’ business models, recalling the effects of the fuel protests in 2000. While the domestic, industrial and service sectors of the UK economy are able to reduce their reliance on oil products, the transport sector continues to use more, with road and air transport using more than 50% of total UK consumption in 2008.
Because of the transport sector’s vulnerability, increases in the cost of oil are felt in other sectors: they raise the cost of capital and deter investment. With transportation being integral to the supply chain, the price rise is felt through higher prices for consumables, most significantly food. The agriculture sector is highly dependent on oil for transportation fuel and as a component in fertilisers and insecticides. Due to increasing overall energy prices, the number of households trapped in fuel poverty is expected to continue to rise. The report’s business-as-usual scenario sees the UK become an overwhelmingly major energy importer, with a devalued currency that offsets economic growth.
A more explicitly critical report has been written by the NGO, Global Witness. Focusing on the ‘four fundamentals’ of oil field depletion, declining discovery rates, insufficient new projects and increasing demand, Heads in the Sand specifically targets governments and their agencies for inaction, being “asleep at the wheel” and demonstrating a lack of appreciation of the imminence and scale of the problem a global oil crunch will bring. The report is especially critical of the International Energy Agency (IEA), upon which governments rely for the reporting of data and forecasts.
Heads in the Sand highlights the geopolitical and social consequences of peak oil, linking the energy crisis to the climate crisis in terms of how short-term, national economic interests are overtaking the need to shift to more sustainable energy supply systems. The consequences of this are disastrous, affecting almost every aspect of life, including “food security, increased geopolitical tension, increased corruption and threats to the nascent global governance reform agenda, and the potential for major international conflict over resources.” As with the effects of climate change, the poor are vulnerable to oil price volatility and restricted supply, as was seen with the food price rises in 2008, which the World Food Programme described as “…a silent tsunami threatening to plunge more than 100 million people on every continent into hunger.” Such volatility has geopolitical implications that are difficult to predict but are likely to be in the form of increased tension between states, rioting and protests around the world and human rights abuses perpetrated by kleptocratic governments. A scenario of collapse within one or two decades, with the decline of globalisation and increased environmental destruction, is suggested.
Like the other reports summarised above, Global Witness describe the strong links between oil use and economic growth and the consequent reversal of growth with a declining supply of oil. With each price shock comes “a vast deployment of national wealth by consuming economies, expenditure that would have been better used in the creation of an alternative and sustainable energy system.” The Heads in the Sand report provides a useful overview of key illustrated facts and figures relating to peak oil aimed at the general reader and is international in outlook, rather than concerned specifically with industry analysis or the impact on the UK economy. It also examines the alternatives to conventional oil production and their feasibility. Enhanced Oil Recovery (EOR), exploitation of Canada’s tar sands, oil shales, heavy oil and natural gas liquids all come with caveats that make them very unlikely to mitigate a decline in global oil production this decade and often come with serious environmental impacts.
Of particular concern is the problem of shrinking Energy Return On Investment (EROI), the ratio of energy output to the energy input required to produce it. The Canadian tar sands, for example, have a net energy of around 10:1 compared with conventional oil of around 30:1. Therefore, the estimated production of 5.9m barrels/day from the tar sands by 2030 is actually worth just 1.6m barrels/day of conventional oil output. Increasingly, a greater share of the energy produced is being diverted back into the production of raw energy.
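The report doesn’t spell out the accounting behind the 1.6m figure, but the textbook form of the net-energy calculation, treating an EROI of r:1 as r units of energy returned per unit invested, is straightforward. This sketch is mine, for illustration only:

```python
# Textbook net-energy arithmetic: with an EROI of r:1, a fraction 1/r of
# gross output is consumed in production, leaving (1 - 1/r) as surplus.
def net_fraction(eroi):
    """Fraction of gross energy output left after deducting the energy input."""
    return 1 - 1 / eroi

print(f"tar sands (10:1): {net_fraction(10):.0%} of gross output is surplus")
print(f"conventional oil (30:1): {net_fraction(30):.0%} of gross output is surplus")
```

By this simple measure the gap between 10:1 and 30:1 is modest, so the much larger discount in the report’s 5.9m to 1.6m comparison evidently rests on a broader accounting of inputs; treat the sketch as the general form of the calculation, not a reproduction of their figures.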
Finally, Global Witness discuss the wasted decade when governments could have acted to mitigate the effects of peak oil but chose to rely on overly optimistic projections, claiming that the IEA misrepresented projected discovery data and was overconfident in its forecasts of future oil production. Clearly, business as usual is no longer an option and radical measures are required by governments to address the scale and imminence of peak oil and its impacts.
The Royal Academy of Engineering’s report, Generating the Future. A report on UK energy systems fit for 2050, warns of the magnitude of the task of reducing emissions by 80%, examining four possible energy scenarios that could meet that target. The report discusses the engineering challenges, emphasising how dependent we currently are on fossil fuels for the vast majority of our energy. A reduction in emissions of 80% would change the chart below, “beyond all recognition.” Carbon Capture and Storage (CCS) of coal, “if successful”, would allow for greater use of coal, but “major changes” would still be required.
The report makes the point that most of the technologies required are already available, but the period of transition from R&D to 90% market penetration is normally in the region of 30-40 years. Not only is the transition to the use of new technologies a major challenge, but the building of new infrastructure is also a “huge challenge.” Reference is made to how the country has met comparable challenges when “on a war footing” when a whole national manufacturing base shifts focus, but clearly the point being made is that the task is not just technological, nor economic, but political and social. Where is the policy that confronts the threat of this ‘war’?
At present there is generally insufficient incentive to make the switch to a new low-carbon technology, particularly when such a switch would be costly and disruptive.
The report highlights the scale of engineering work required, listing examples such as: building three miles of wave power machines each month for the next 40 years; importing huge numbers of wind and wave turbines; building port facilities to handle off-shore wind turbine construction on a scale similar to that required for North Sea oil and gas development; and laying a network of pipes to carry carbon captured from coal stations, again equivalent to the infrastructure developed for the North Sea oil and gas industry. All of the scenarios also necessitate a major upgrade to the electricity grid, on a scale not seen since the 1970s and requiring billions of pounds of investment. The electrification of transport and improvements to the efficiency of buildings require “major systemic changes” to millions of individual assets. To build and maintain all of this, major training programmes will be needed to develop the necessary skills, and new technologies will demand new academic disciplines. All of this will take place within an increasingly competitive global environment as other countries work towards decarbonisation, and it requires our industry to remain agile enough to avoid lock-in to new technologies that have short-term benefit but become obsolete over these crucial decades. “In summary, the changes to the UK energy system required to meet any of the scenarios will be considerable and disruptive.”
The report is largely concerned with the balance of energy and its flow and does not address issues of energy security. It shows how the Climate Change Act requires a change from this:
The first scenario sets the demand level to be the same as current levels. It should, however, be stressed that this by no means represents business as usual. Simply keeping demand at a similar level to now will require considerable effort. … In technological terms there are no choices to be made – the demand is so large that every available technology will be needed as quickly as possible. The main problems for scenario 1 will be buildability and cost to the nation. With over 80 new nuclear or CCS power plants required – around two per year – along with vast increases in all forms of renewables, building the system would require an enormous effort, probably only achievable by monopolising most of the national wealth and resources.
Rather than outlining the other three scenarios, which assume reductions and changes in demand through efficiencies and alternative uses of technology, as well as substantial use of low-grade heat, it is worth noting the statement above that keeping demand as it is, is a challenge in itself. Since 1980, with the exception of industry, ((Energy demand from industry has decreased by 34%, presumably due to the de-industrialisation of our economy.)) the UK’s final energy demand has risen by 68% for transport, 10% for the domestic sector and 3% for the service sector. ((DECC, UK Energy in Brief 2008)) There is a limit to how much further we can de-industrialise, so the kinds of demand reductions required for scenarios 2, 3 and 4 assume reductions in sectors that have never previously shown them. All scenarios suffer from an increased reliance on intermittent sources of energy supply, and it is suggested that fossil fuels be used as backup sources until 2050, while we “adjust” to the new system. There is an assumption that new, as yet unknown, technologies will improve the resilience of supply beyond 2050.
The report concludes that while each of the four scenarios is only meant to be illustrative and not predictive, they do show that there is no single ‘silver bullet’ that will achieve the cuts in emissions required. The report is useful in the way it discusses energy flows in a national system of supply and demand. Key to each scenario is that electricity, generated from a variety of renewable and low carbon sources, will have to become the major source of power, providing energy to around 80% of our transportation. Energy demand must be reduced in all scenarios; even the first, which assumes no further increase in demand, represents a reduction on current forecasts. The need for behavioural change is briefly mentioned, but not elaborated on.
The urgency of the task is highlighted in terms of both its scale and the time required to meet the 2050 target. Because re-engineering the UK’s energy infrastructure is necessarily measured in decades, and the technologies themselves are expected to remain in place for several decades, it is the current crop of low carbon technologies that will have to make the significant contribution to the 2050 targets. Future technologies belong to the future and will make little contribution to the work required over the next 40 years. Quite bluntly, the report states: “There is no more time left for further consultations or detailed optimisation. Equally, there is no time left to wait for new technical developments or innovation. We have to commit to new plant and supporting infrastructure now.” The report calls for strong direction from government, admitting that the scale of the challenge is
currently beyond the capacity of the energy industry to deliver. In order to achieve the scale of change needed, industry will require strong direction from government. Current market forces and fiscal incentives will not be adequate to deliver the shareholder value in the short-term and to guarantee the scale of investment necessary in this timescale.
Finally, the report is critical of the current policy situation, stating that
“current government structures, including market regulation, are, as yet, simply not adequate for the task.” Government must be re-organised around the challenge (i.e. be on a ‘war footing’) in order to provide “the clear and stable long-term framework for business and the public that is not currently in evidence.” It also needs to be recognised that the significant changes required to the UK energy system to meet the emissions reduction targets will inevitably involve significant rises in energy costs to end users. ((A similar report by the Institution of Mechanical Engineers complements the recommendations of this one, though it places a greater emphasis on the need for methods of adaptation and not just mitigation.))
Pielke Jnr.’s paper, The British Climate Change Act: a critical evaluation and proposed alternative approach, is a short and revealing piece which argues that the magnitude of the task set out by the UK Climate Change Act will inevitably lead to its failure as legislation, and that the sooner this is recognised, the better the chance of creating policy that drives realistic outcomes. The paper is not an examination of technologies but rather a calculation of, and reflection on, the UK’s past rates of decarbonisation, compared with other countries’ demonstrated rates of decarbonisation.
Methodologically, Pielke Jnr. examines two “primary factors” that drive emissions: economic growth (or contraction), i.e. GDP, and changes in technology, typically represented as carbon dioxide emissions per unit of GDP.
Each of these two primary factors is typically broken down into two sub-factors. GDP growth (or contraction) comprises changes in population and in per capita GDP. Carbon dioxide emissions per unit of GDP is the product of energy intensity (energy per unit of GDP) and carbon intensity (carbon per unit of energy).
The logic of these relationships means that “carbon accumulating in the atmosphere can be reduced only by reducing (a) population, (b) per capita GDP, or (c) carbon intensity of the economy.” Pielke is concerned with the creation of policy that will achieve its intended effect and notes that population reduction and/or a reduction in per capita GDP are not realistic strategies on which governments can base policy. Therefore, a reduction in the carbon intensity of the economy (decarbonisation) is the only realistic policy choice.
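These relationships are the well-known Kaya identity, in which emissions are the product of four factors. A minimal sketch in Python (the example numbers are my own illustrative assumptions, not figures from Pielke’s paper):

```python
def kaya_emissions(population, gdp_per_capita, energy_per_gdp, co2_per_energy):
    """Kaya identity: CO2 = P x (GDP/P) x (E/GDP) x (CO2/E).

    Any emissions target must be met by shrinking one or more of these
    factors. Pielke's point is that only the last two, together the
    'carbon intensity of the economy', are realistic policy levers.
    """
    return population * gdp_per_capita * energy_per_gdp * co2_per_energy

# Illustrative, assumed inputs only (roughly UK-scale): 61m people,
# $35k GDP per capita, and notional energy and carbon intensities.
emissions = kaya_emissions(61e6, 35_000, 6.0, 0.06)
```

Because the identity is a simple product, a given percentage cut in emissions must be matched by an equivalent combined percentage fall across the factors, which is what drives the arithmetic of the rest of the paper.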
The paper approaches the challenge of decarbonisation in two ways. A ‘bottom-up’ approach looks at the projected increase in UK population as well as projections of per capita GDP, from which an implied rate of decarbonisation can be estimated. The other, ‘top-down’, approach examines overall GDP growth and derives the implied rates of decarbonisation needed to meet the specified target. Through a series of straightforward calculations, Pielke’s bottom-up analysis shows that
the combined effects of population and per capita economic growth imply that to meet the 2022 and 2050 emissions targets, increasing energy efficiency and reduced carbon intensity of energy would have to occur at an average annual rate of 5.4% (to 2050) and 4.0% (to 2022). These numbers also imply that successfully meeting the 2022 target with a 4.0% annual rate of decarbonization would necessitate a rate higher than 5.4% from 2022 to 2050.
The top-down analysis begins with an assumption about future economic growth, integrating future population growth and future per capita economic growth, then works backwards to determine the rate of decarbonisation required to meet the future emissions target. This approach underlines the fact that higher rates of GDP growth require correspondingly higher rates of decarbonisation. The paper is worth reading for a full understanding of the figures alone, but in summary, this analysis shows that the rates of decarbonisation required are “4.4% per year for the 2022 target and 5.5% for the 2050 target. These numbers are substantially higher than the rates of decarbonization observed from 1980 to 2006 and 2001 to 2006.” By comparison, the actual rate of decarbonisation in the UK was 1.9% between 1980 and 2006, falling to 1.3% during the period 2001-6.
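The top-down arithmetic can be sketched as follows. The cut fraction, GDP growth rate and horizon below are my own illustrative assumptions rather than Pielke’s exact inputs (the Act’s 80% target is measured against a 1990 baseline, which this simplification ignores), but the result lands in the same region as his figures:

```python
def required_decarb_rate(emissions_cut, gdp_growth, years):
    """Annual rate at which CO2-per-unit-GDP must fall so that total
    emissions end up at (1 - emissions_cut) of today's level, while
    GDP grows at gdp_growth per year over the given horizon."""
    intensity_ratio = (1 - emissions_cut) / ((1 + gdp_growth) ** years)
    return 1 - intensity_ratio ** (1 / years)

# Illustrative: an 80% cut over 44 years (2006-2050), with GDP growth
# assumed at 2.5%/year, implies roughly 6% decarbonisation per year,
# against the UK's actual 1.3-1.9% rates cited above.
rate = required_decarb_rate(0.80, 0.025, 44)
```

The key structural point survives any reasonable choice of inputs: faster assumed GDP growth mechanically raises the required rate of decarbonisation, because the intensity ratio must fall further to offset it.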
In response to this, Julia King, Vice Chancellor of Aston University and member of the Climate Change Committee, replied that technically, these rates of de-carbonisation are “do-able”. However,
I think you really do need to take due account of the fact that most people who are putting together targets and timetables are doing this on the basis of a lot of research into potential scenarios. It is another issue turning that into policy, for governments, and it is very easy for all of us who do not have to be elected to say ‘this is how I would do it’, and I have a lot of sympathy for our politicians, because they are dealing with extremely selfish populations.
The latter part of Pielke’s paper compares his analysis with the actual rates of decarbonisation in the UK and other countries. France illustrates the magnitude of the decarbonisation challenge well, as it is the major developed economy with the lowest carbon intensity (0.30 t of carbon dioxide per $1000 of GDP in 2006). France achieved this through its reliance on nuclear power for electricity generation and was able to decarbonise overall by about 2.5% during the period 1980-2006. Notably, however, it achieved a rate of only 1.0% from 1990 to 2006.
It took France about 20 years to decarbonize from 0.42 t of carbon dioxide per $1000 GDP, the level of the UK in 2006, to 0.30 t of carbon dioxide per $1000 GDP. France’s decarbonization experience thus provides a useful analogue. For the UK to be on pace to achieve the targets for emissions reductions implied by the Climate Change Act its economy would have to become as carbon efficient as France by no later than 2016 … In practical terms this could be achieved, for example, with about 30 new nuclear plants to be built and in operation by 2015, displacing coal and gas fired electrical generation.
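The compound-rate arithmetic behind the France analogue can be sketched in a few lines. The 1.7% figure below is my own back-calculation from the “about 20 years” claim, not a number from the paper; the 1.3% figure is the UK’s recent rate quoted above:

```python
import math

def years_to_reach(start_intensity, target_intensity, annual_rate):
    """Years for carbon intensity (t CO2 per $1000 GDP) to fall from
    start to target if it declines by annual_rate each year."""
    return math.log(target_intensity / start_intensity) / math.log(1 - annual_rate)

# From 0.42 (UK, 2006) to 0.30 (France, 2006):
# at ~1.7%/year it takes about 20 years, consistent with France's record;
# at the UK's recent 1.3%/year it takes over 25 years, i.e. well past 2016.
france_pace = years_to_reach(0.42, 0.30, 0.017)
uk_recent_pace = years_to_reach(0.42, 0.30, 0.013)
```

This is why Pielke can argue the 2016 milestone is out of reach at historical rates: the UK would need France’s twenty-year transformation compressed into a decade.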
Pielke concludes by arguing that the approach taken to the Climate Change Act has been backwards: setting a target without being clear on how it will be achieved. In terms of policy success, he also points out the danger of confusing a reduction in emissions with decarbonisation: lower emissions due to recession, for example, do little to change the role of energy technology in the economy. Without changes in energy technology, emissions remain tightly coupled with GDP and population growth, neither of which the Climate Change Act is attempting to reduce. The success of the policy will rest on reducing emissions under the forecast conditions of economic and population growth, so carbon dioxide emissions per unit of GDP (i.e. decarbonisation) is the key measurement by which to judge it. The UK has achieved significant rates of decarbonisation in the past (better than other countries, though not as high as the Act implies), but, as pointed out above, this was due to the de-industrialisation of the 1980s and 1990s, and the rate has since slowed considerably.
Given the magnitude of the challenge and the pace of action, it would not be too strong a conclusion to suggest that the Climate Change Act has failed even before it has gotten started. The Climate Change Act does have a provision for the relevant minister to amend the targets and timetable, but only for certain conditions. Failure to meet the targets is not among those conditions. It seems likely that the Climate Change Act will have to be revisited by Parliament or simply ignored by policy makers. Achievement of its targets does not appear to be a realistic option… Because no one knows how fast a large economy can decarbonize, any policy (or policies) focused on decarbonization will have to proceed incrementally, with constant adjustment based on the proven ability to accelerate decarbonization (cf Anderson et al 2008). Setting targets and timetables for emissions reductions absent knowledge of the ability to decarbonize is thus just political fiction.
Finally, Pielke proposes alternative methods of accelerating decarbonisation: international co-operation in helping other countries decarbonise to at least the level currently observed in the UK; sector-specific policy that addresses the processes of decarbonisation without the impossible targets and timetables; the expansion of low/no carbon energy supplies; and incremental improvements rather than long-term measures. It is a useful paper, both for its analysis and its approach, which clearly recognises that economic growth will remain the first priority for the UK and other countries.
What the graph above shows is that current global emissions are worse than any of the ‘marker scenarios’ outlined in the IPCC’s 2007 assessment report on Climate Change. Rather than repeat it here, Stuart Staniford provides an excellent analysis of what this implies. I would simply add that, if the Peak Oil and Climate Change summaries I’ve given above are on the money, then by the time of the next IPCC report in 2014 the strategy, policy and methods of public engagement needed to address the related predicaments of energy and climate change will require a significant re-think, in the middle of this decade, just before the next UK general election.
Some significant questions remain, too:
If economic growth is coupled to the supply of energy, how will Peak Oil affect global GDP? The Climate Change Act, Pielke’s analysis and the IPCC scenarios all assume continued economic growth, yet how will a decline in the production of oil (and, by implication, a decline in GDP) affect both the production and supply of other fossil fuels and our ability to fund the required transition to low-carbon economies? Will we have to wait until the subsequent (post-2014) IPCC report before Peak Oil and its economic consequences are taken into account? I guess so.