Archive for the ‘AI’ Category

Towards 2048, Erkki Kurenniemi*

Erkki Kurenniemi (1941-2017), a pioneer of electronic music, technological inventions and imaginaries, has passed away. He can be characterised by lists that catalogue his interests and skills, lists that are eclectic and fragmented but also full of enthusiastic curiosity: a filmmaker, creative music technologist, roboticist, tinkerer as well as “perennial dissident” (as Erkki Huhtamo quipped). He was of the generation who used computers before they became personal; computers were odd experimental machines found at university departments and sometimes banks. Perhaps part of the trick was also to remind oneself that computers were something you could build yourself, and use for all sorts of wrong purposes.

The Finnish technologist recorded his life with the meticulous intent of an administrative worker, a scribe – notebooks, audio recordings, video clips, collections of ephemera. A large part of Kurenniemi’s life was a sort of durational performance art piece that aimed to gather bits of human life, memorabilia, towards the point (around 2048) at which computational capacities are efficient enough to model and simulate human life.

He also took the liberty of writing his own premature obituary, “Oh, Human Fart”, some 13 years before his actual death. The text starts like this:

“I was five when the ENIAC electronic computer was started. During the fifties, as a schoolboy, I read about computers and electronic music. Max Mathews used the computer to generate music. With my father, I visited the Bull computer factory in France and I was sold.”

In many ways, Kurenniemi himself wrote what others should write and think about him. After that self-defined ur-scene, many devices, sounds and ideas followed, like the Dimi series of synthesizers, where in some cases touch and movement also became sound.

Kurenniemi’s way of narrating his own life emphasises the role of technology, which is no surprise. One can say that he is a symptom of a particular period: the post-World War II and Cold War age of computer technologies, experimental technological arts, and the discourse of cyborgs and the technological singularity.

But all this was also situated in the more mundane entry of technologies into institutions: university departments, companies, socio-technical infrastructures from telephony to gaming to the automation of factory production.

Kurenniemi’s work became a reference point that was rethought, reinvented, remixed, and re-performed in various contexts in electronic music: not merely replayed, but used as a resource for experiments. Florian Hecker also discussed Kurenniemi’s work as part of the wider culture of experiments in sound from Cage to Xenakis: “Kurenniemi showed progression from one register to the next, the period of his musical instruments was followed by a study of tuning systems and theoretical conceptions on neural networks; it’s essential to do something else with all that material, rather than a mere scholarly reactivation or reorganization.”

Cue in, Pan Sonic with Kurenniemi.

From sounds and performance to contemporary art, Kurenniemi’s work has been featured in various exhibitions, including at dOCUMENTA (13), Kunsthall Aarhus and Kiasma in Helsinki. Curators such as Joasia Krysa have been especially active in articulating Kurenniemi’s work in the context of contemporary technological arts. His archival project can be perceived as an archival fever that was partly triggered as part of digital culture. However, for Kurenniemi, this was always in the context of imagining the coming AI future, but without the fallacy that this machine intelligence would be humanlike. Why would it want to model and imitate something so “slow, imprecise, forgetful, and easily fatigued” (Kurenniemi’s words)? Kurenniemi’s vision of the future was based on I.B.N.: info, bio, nano, the three defining scales of social change. The future did not match particularly well with the human form or size. Oh human fart.

Reading Kurenniemi’s life, try approaching it as rewind and fast forward. Time-axis manipulation: backwards, he is part of a cultural history of computing, of early computer experiments with visual arts such as computer animations; and then the other way, he is also a forward-dreaming, sometimes hallucinating, writer of the imaginary of a technological next step that takes a singular turn.

A switch.

Electronics are the backbone of this imaginary, both as visuals and as sounds, but despite his at times seemingly focused vision of the coming quantum computer future, perhaps even he was never exactly sure what was to come: perhaps these ideas, snippets and machines were all little probes into what is possible? Of course, he was convinced that certain technological advances would happen, but perhaps as interesting as the wild imaginaries were the ways in which he worked closely with machines throughout his life, as a sort of companion to his own meat-based existence.


It was not merely about knowing what’s coming but about experimenting with how to know what’s to come, and teaching that way of thinking to others too. There is, of course, a strong hint of the particular optimism that characterised the spectacles of technology in the 1950s and 1960s in the US as well as Europe. The Eames Office offered its own version of visual communication in the age of information, and many other institutions from MIT to AT&T, EAT, etc. participated in the new institutional entry of technological arts as part of world fairs and other events. The avant-garde was – and has since been – close to the corporations of technology, so that it came to be perceived as a natural step when Silicon Valley took over the role of offering imaginaries of the technological future. But sometimes, instead of the elon musks, it’s more interesting to read the erkki kurenniemis and their much earlier visions that are not solely a corporate fantasy brand line or a TED talk. Sometimes it is more interesting to look at what was going on in the seeming peripheries, like the Nordic countries, to get a sense of a slightly alternative way to understand this story, rewinding and fast-forwarding.

After Kurenniemi’s death, what’s left is a collection of his recordings and other materials, housed at the Central Art Archives of the Finnish National Gallery (thanks to a lot of work by Perttu Rastas and others). It is a mixed collection of technological dreaming that at times seemed more interesting when it was not focused on trying to invent a new thing but just speculating, like this one sound recording of Kurenniemi’s. This is where the technological imaginary does not follow a straight geometric line, but goes off on a tangent and towards escape velocity.

“(00:00:00) (Click click, radio signal, blows in the microphone five times, click, blow)
One, two, three, puppadadud. Fuck, fuck, fuck, this is sensitive. There we go.
(blow) Yeah, a dreaming computer… will be the last human invention. Well not the last one, but… the last invention. Because a dreaming computer will already have dreamt up
everything. Prior unconscious. Well, no. Dead computers may only be in two spaces: in an idle loop waiting to be interrupted or in a conscious space receiving and handling external information, printing it. A sleeping computer is not in an idle loop. Yeah, well of course it is, it does ask questions and wakes up when needed but otherwise it dreams. It is organizing its files, optimizing, associating, organizing, thinking, planning. And only when called upon, it interrupts its sleep for a little while to answer a question.

(The sound of the microphone being touched, cut) (Kurenniemi C4008-1 1/11)”


More on Kurenniemi and texts by Kurenniemi in English:
Erkki Huhtamo’s Preface “Fragments as Monument” can be read online.
Mika Taanila’s film about Kurenniemi: The Future Is Not What It Used To Be.
The Wire wrote a short obituary about Kurenniemi.
* Note also that “Towards 2048” is the title of the earlier Kiasma exhibition on Erkki Kurenniemi.

Autonomous AI as Weapons, Policy and Economy

August 11, 2015

With my colleague Ryan Bishop, I did some popular writing over the summer, responding to the recent call to ban autonomous weapons systems. The open letter was widely discussed but usually with the same emphases, so we wanted to add our own flavour to the debate. What if they are already here? What if the media archaeology of autonomous weapons goes way back to the experimental weapons development started during the Cold War?

Here’s our short piece in The Conversation. It was rather heavily edited, so I took the liberty of pasting below the longer original version (not copyedited though).


Ryan Bishop and Jussi Parikka, Winchester School of Art/University of Southampton
Autonomous AI as Weapons, Policy and Economy

A significant cadre of scholars and corporate representatives recently signed an open letter calling for a “ban on offensive autonomous weapons systems.” The letter was widely publicised and supported by well-known figures from Stephen Hawking to Noam Chomsky, corporate influentials like Elon Musk, Google’s leading AI researcher Demis Hassabis and Apple co-founder Steve Wozniak. The letter received much attention in the news and social media with references to killer AI robots and mentions of The Terminator, adding a science-fictional flavour. But the core of the letter referred to an actual issue having to do with the possibility of autonomous weapons becoming a widespread tool in larger conflicts and in various tasks “such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”

One can quibble little with the consciences on display here, even if scholars such as Benjamin Bratton had already argued earlier that we need to be aware of much wider questions about design and synthetic intelligence. Such issues cannot be reduced to the Terminator imaginary, which narcissistically assumes that AI is out there to get us. In any case, scholars should address the much longer backstory to autonomous weapons systems, which makes the issue as political as it is technological.

The letter concludes with the semi-apocalyptic and not altogether inaccurate assertion that “The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.” However, this is not the endpoint but rather the starting point.

Unfortunately, the global AI arms race has already started. The most worrying dimension of this arms race is that it does not always look like one. The division between defensive and offensive weapons was already blurred during the Cold War.

The doctrine of pre-emptive strike laid waste to the difference between the two. The agile capacity to reprogram autonomous systems means all systems can be altered with relative ease, and the offensive/defensive distinction disappears even more fully.

The new weapons systems can look like the Planetary Skin Institute or the Central Nervous System for the Earth (by Hewlett-Packard), two of the many autonomous remote sensing systems that allow for automated real-time responses to the conditions they are meant to track. And to act on that information. Automatically.

In the present, platforms for planetary computing operate with and through remote sensing systems that gather real-time data of the earth for specific stakeholders through models and simulations. A system such as the Planetary Skin Institute, initiated by NASA and Cisco Systems, operates under the aegis of providing a multi-constituent platform for planetary eco-surveillance. It was originally designed to offer a real-time open network of simulated global ecological concerns, especially treaty verification, weather crises, carbon stocks and flows, risk identification and scenario planning and modeling for academic, corporate and government actors (thus replicating the US post-World War II infrastructural strategy). It is within this context of autonomous remote sensing systems that AI weaponry must be understood; the hardware and software, as well as overall design and implementation, are the same for each. Similarly, the provenance of all of these resides primarily in Cold War systems designs and goals.

The Planetary Skin Institute now operates as an independent non-profit global R&D organization with its stated goal of being dedicated to “improving the lives of millions of people by developing risk and resource management decision services to address the growing challenges of resource scarcity, the land-water-food-energy-climate nexus and the increasing impact and frequency of weather extremes.” It therefore claims to provide a “platform to serve as a global public good,” thus articulating a position and agenda as altruistic as can possibly be imagined. The Planetary Skin Institute works with “research and development partners across multiple sectors regionally and globally to identify, conceptualize, and incubate replicable and scalable big data and associated innovations, that could significantly increase the resilience of low-income communities, increase food, water, and energy security and protect key ecosystems and biodiversity”. What it does not mention is the potential for resource futures investment that could accompany such data and information. This reveals the large-scale drive from all sectors to monetize or weaponize all aspects of the world.

The Planetary Skin Institute’s system echoes what a number of other remote automated sensing systems provide in terms of real-time, tele-tracking occurrences in many parts of the globe. The slogan for the institute is “sense, predict, act,” which is what AI weapons systems do, automatically and autonomously. Autonomous weapons are said to be “a third revolution in warfare, after gunpowder and nuclear arms” but such capacities for weapons have been around since at least 2002. At that time drones transitioned to being “smart weapons” and thus enabled to select their own targets to fire on (usually using GPS locations on hand-held devices). Geolocation based on SIM cards is now also used in U.S. drone assassination operations.

Far from being only speculation concerning the future, autonomous systems have an institutional legacy as part of the Cold War. They are part of our inheritance from WWII and Cold War complex systems interacting between university, corporate and military R&D. Agencies such as the American DARPA are a legacy of the Cold War, founded in 1958 but still very active as a high-risk, high-gain model for speculative research.

The R&D innovation work is also spread out to the wider private sector through funding schemes and competitions. This essentially illuminates the continuation of Cold War schemes in current private-sector development work: “the security industry” is already structurally so tied to governmental policies, military planning and economic development that to ask about banning AI weaponry is to point to wider questions about the political and economic systems that support military technologies as an economically lucrative area of industry. Author E.L. Doctorow once summarised the nuclear bomb in relation to its historical context in the following manner: “First, the bomb was our weapon. Then it became our foreign policy. Then it became our economy.” We need to be able to critically evaluate the same triangle as part of autonomous weapons development, which is not merely about the technology but indeed about policies and politics, and increasingly, economies and economics.