
Autonomous AI as Weapons, Policy and Economy

With my colleague Ryan Bishop, I did some popular writing over the summer in response to the recent call to ban autonomous weapons systems. The open letter was widely discussed, but usually with the same emphases, so we wanted to add our own flavour to the debate. What if such weapons are already here? What if the media archaeology of autonomous weapons goes way back to the experimental weapons development that started during the Cold War?

Here’s our short piece in The Conversation. It was rather heavily edited, so I have taken the liberty of pasting the longer original version below (not copyedited, though).

__

Ryan Bishop and Jussi Parikka, Winchester School of Art/University of Southampton
Autonomous AI as Weapons, Policy and Economy

A significant cadre of scholars and corporate representatives recently signed an open letter calling for a “ban on offensive autonomous weapons systems.” The letter was widely publicised and supported by well-known figures from Stephen Hawking to Noam Chomsky, corporate influentials such as Elon Musk, Google DeepMind’s leading AI researcher Demis Hassabis and Apple co-founder Steve Wozniak. The letter received much attention in the news and on social media, with references to killer AI robots and mentions of The Terminator adding a science-fictional flavour. But the core of the letter referred to an actual issue: the possibility of autonomous weapons becoming a widespread tool in larger conflicts and in various tasks “such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”

One can quibble little with the consciences on display here, even if scholars such as Benjamin Bratton had already argued that we need to be aware of much wider questions about design and synthetic intelligence. Such issues cannot be reduced to the Terminator imaginary, which narcissistically assumes that AI is out there to get us. In any case, scholars should address the much longer backstory to autonomous weapons systems, which makes the issue as political as it is technological.

The letter concludes with the semi-apocalyptic, and not altogether inaccurate, assertion that “The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.” However, this is not the endpoint but rather the starting point.

Unfortunately, the global AI arms race has already started. The most worrying dimension of this AI arms race is that it does not always look like one. The division between defensive and offensive weapons was already blurred during the Cold War.

The doctrine of the pre-emptive strike laid waste to the difference between the two. The agile capacity to reprogram autonomous systems means that all such systems can be altered with relative ease, and the offensive/defensive distinction disappears even more fully.

The new weapons systems can look like the Planetary Skin Institute or Hewlett-Packard’s Central Nervous System for the Earth, two of the many autonomous remote sensing systems that allow for automated real-time responses to the conditions they are meant to track. And to act on that information. Automatically.

In the present, platforms for planetary computing operate with and through remote sensing systems that gather real-time data about the earth for specific stakeholders through models and simulations. A system such as the Planetary Skin Institute, initiated by NASA and Cisco Systems, operates under the aegis of providing a multi-constituent platform for planetary eco-surveillance. It was originally designed to offer a real-time open network of simulated global ecological concerns, especially treaty verification, weather crises, carbon stocks and flows, risk identification, and scenario planning and modeling for academic, corporate and government actors (thus replicating the US post-World War II infrastructural strategy). It is within this context of autonomous remote sensing systems that AI weaponry must be understood: the hardware and software, as well as the overall design and implementation, are the same for each. Similarly, the provenance of all of these resides primarily in Cold War systems design and goals.

The Planetary Skin Institute now operates as an independent non-profit global R&D organization with the stated goal of being dedicated to “improving the lives of millions of people by developing risk and resource management decision services to address the growing challenges of resource scarcity, the land-water-food-energy-climate nexus and the increasing impact and frequency of weather extremes.” It therefore claims to provide a “platform to serve as a global public good,” thus articulating a position and agenda as altruistic as can possibly be imagined. The Planetary Skin Institute works with “research and development partners across multiple sectors regionally and globally to identify, conceptualize, and incubate replicable and scalable big data and associated innovations, that could significantly increase the resilience of low-income communities, increase food, water, and energy security and protect key ecosystems and biodiversity”. What it does not mention is the potential for resource-futures investment that could accompany such data and information. This reveals the large-scale drive from all sectors to monetize or weaponize all aspects of the world.

The Planetary Skin Institute’s system echoes what a number of other automated remote sensing systems provide in terms of real-time tele-tracking of occurrences in many parts of the globe. The slogan for the institute is “sense, predict, act,” which is what AI weapons systems do, automatically and autonomously. Autonomous weapons are said to be “a third revolution in warfare, after gunpowder and nuclear arms”, but such capacities have been around since at least 2002. At that time drones transitioned into being “smart weapons”, enabled to select their own targets to fire on (usually using GPS locations on hand-held devices). Geolocation based on SIM cards is now also used in U.S. drone assassination operations.
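To make the “sense, predict, act” structure concrete, here is a minimal sketch of such a loop in Python. It is purely illustrative: every function name and threshold is hypothetical, and it does not depict the architecture of any actual system mentioned above. The point is structural: once sensing, prediction and action are chained without a human in the loop, what the system tracks and what it does about it become interchangeable parameters.

```python
# Minimal, purely illustrative "sense, predict, act" loop.
# All names and values are hypothetical placeholders.

import random
import time


def sense() -> float:
    """Stand-in for a remote sensor reading of some tracked condition."""
    return random.random()


def predict(reading: float) -> float:
    """Stand-in for a model/simulation step that turns data into a forecast."""
    return reading * 1.1  # trivial placeholder "model"


def act(forecast: float, threshold: float = 0.8) -> None:
    """Automated response: no human sits between prediction and action."""
    if forecast > threshold:
        print(f"automated response triggered (forecast={forecast:.2f})")


if __name__ == "__main__":
    # The loop runs continuously and autonomously; the politically salient
    # feature is the closed loop itself, not any single step within it.
    for _ in range(5):
        act(predict(sense()))
        time.sleep(1)
```

Whether the `act` step issues an ecological alert or a weapons release is, from the standpoint of the software skeleton, a configuration detail, which is precisely why the defensive/offensive distinction is so hard to sustain.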

Rather than being only a matter of speculation about the future, autonomous systems have an institutional legacy rooted in the Cold War. They are part of our inheritance from the complex WWII and Cold War systems of interaction between university, corporate and military R&D. Agencies such as the American DARPA are a legacy of the Cold War: founded in 1958, it is still very active as a high-risk, high-gain model for speculative research.

The R&D innovation work is also spread out to the wider private sector through funding schemes and competitions. This is essentially a continuation of Cold War schemes in current private-sector development work: the “security industry” is already so structurally tied to governmental policies, military planning and economic development that to ask about banning AI weaponry is to raise wider questions about the political and economic systems that support military technologies as an economically lucrative area of industry. The author E.L. Doctorow once summarised the nuclear bomb in relation to its historical context in the following manner: “First, the bomb was our weapon. Then it became our foreign policy. Then it became our economy.” We need to be able to critically evaluate the same triangle as part of autonomous weapons development, which is not merely about the technology but about policies and politics and, increasingly, economies and economics.
