Machine Learned Futures
Abelardo Gil-Fournier and I are writing a text or two on questions of temporality in contemporary visual culture. Our specific angle is on (visual) forms of prediction and forecasting as they emerge in machine learning: planetary surface changes, traffic and autonomous cars, etc. Here’s the first bit of an article on the topic (forthcoming later, we hope, in both German and English).
“‘Visual hallucination of probable events’, or, on environments of images and machine learning”
I Introduction
Contemporary images come in many forms but also, importantly, in many times. Screens, interfaces, monitors, sensors and many other devices that are part of the infrastructure of knowledge build up many forms of data visualisation in so-called real-time. While data visualisation might not be that new a technical form of organising information as images, it takes a particularly intensive temporal turn with networked data, one that has been discussed for example in contexts of financial speculation.[1] At the same time, these imaging devices are part of an infrastructure that does not merely observe the microtemporal moment of the “real” but unfolds in the now-moment. In terms of geographical, geological and, broadly speaking, environmental monitoring, the now-moment expands into near-future scenarios where other aspects, including the imaginary, are at play. Imaging becomes a form of nowcasting, exposing the importance of understanding change as it changes.
Here one thinks of Paul Virilio and how “environment control” functions through the photographic technical image. In Virilio’s narrative, the connection of light (exposure), time and space is bundled up as part of the general argument about the disappearance of the spatio-temporal coordinates of the external world. From real space we move to the ‘real-time’ interface[2] and to an analysis of how visual management detaches from the light of the sun, the time of the seasons and the longue durée of planetary qualitative time, shifting to the internal mechanisms of calculation that pertain to electric and electronic light. Hence, the captured photographic image prescribes, for Virilio, the exposure of the world: it is an intake of time and an intake of light. Operating on the world as active optics, these intakes then become the defining temporal frame for how environments are framed and managed through operational images, to use Harun Farocki’s term, which then operationalize how we see geographic spaces too. The time of photographic development (Niépce), the “cinematographic resolution of movement” (Lumière), or for that matter the “videographic high definition of a ‘real-time’ representation of appearances”[3] are all part of Virilio’s broad chronology of time in technical media culture.
But what is at best implied in this cartography of active optics is attention to the mobilization of time as predictions and forecasts. For the operations of time and the production of times move from meteorological forecasting to computer models, and from computer models to a plethora of machine learning techniques that have become another site of transformation of what we used to call photography. Joanna Zylinska names this generative life of photography its nonhuman realm of operations, one that rearranges the image away from any historical legacy of anthropocentrism to include a variety of other forms of action, representation and temporality.[4] The techniques of time and images push further what counts as operatively real, and what forms of technically induced hallucination – or, in short, in the context of this paper, machine learning – are part of current forms of the production of information.
In the information society and digital culture, too, images persist. They persist as markers of time in several senses, referring not only to what the image records – the photographic indexicality of a time past or the documentary status of images as used in various administrative and other contexts – but also to what it predicts. Techniques of machine learning are one central aspect of the reformulation of images and their uses in contemporary culture: from video prediction of the complexity of multiple moving objects we call traffic (cars, pedestrians, etc.) to satellite imagery monitoring agricultural crop development and forest change. Such techniques have become one central example of how the earth’s geological and geographical changes become understood through algorithmic time, and also of how, for instance, very rapidly changing vehicle traffic is treated in the same way as the much slower earth-surface durations of crops. In all cases, a key aspect is the ability to perceive potential futures and fold them into real-time decision-making mechanisms.
Computational microtemporality takes a futuristic turn; algorithmic processes of mobilizing datasets in machine learning become activated in different institutional contexts as scenarios, predictions and projections. Images run ahead of their own time as future-producing techniques.
Our article is interested in a distinct technique of imaging that speaks to the technical forms of time-critical images: Next Frame Prediction and the forms of predictive imaging employed in contemporary environmental contexts (such as agriculture and climate research). While questions about the “geopolitics of planetary modification”[5] have become a central aspect of how we think of the ontologies of materiality and the Earth, as Kathryn Yusoff has demonstrated, we are interested in how these materialities are also produced on the level of images.
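To make the computational side of this technique a little more tangible, the following is a deliberately minimal sketch of how next frame prediction is typically set up in machine learning practice: a small convolutional network that takes a stack of past frames and is trained to output the frame that follows. The class name NextFramePredictor, the number of context frames and the toy tensors are our own illustrative assumptions, not the architecture of any particular system discussed in the article.

```python
# Minimal sketch of next frame prediction (illustrative assumptions throughout).
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self, context_frames=4):
        super().__init__()
        # The k past frames are stacked along the channel axis (grayscale).
        self.net = nn.Sequential(
            nn.Conv2d(context_frames, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # pixel intensities in [0, 1]
        )

    def forward(self, past):       # past: (batch, k, H, W)
        return self.net(past)      # predicted next frame: (batch, 1, H, W)

model = NextFramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy data standing in for a video or a satellite time series.
past_frames = torch.rand(8, 4, 64, 64)   # 8 sequences of 4 past frames
next_frame = torch.rand(8, 1, 64, 64)    # the frame to be predicted

# One training step: the "future" the model learns is simply the next
# already-recorded frame.
prediction = model(past_frames)
loss = loss_fn(prediction, next_frame)
loss.backward()
optimizer.step()
```

The point of the sketch is simply that the future such a network learns is nothing but the next already-recorded frame: prediction here is a statistical folding of the archive forward in time.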
Real-time data processing of the Earth – not as a single-view entity, but as an intensively mapped set of relations that unfolds in real-time data visualisations – becomes a central way of continuing earlier, more symbolic forms of imagery such as the Blue Marble.[6] Perhaps not deep time in the strictest geological terms, agricultural and other related forms of environmental and geographical imaging are nevertheless one central way of understanding the visual culture of computational images that do not only record and represent but predict and project as their modus operandi.
This text will focus on the temporality of the image that is part of these techniques, from the microtemporal operation of Next Frame Prediction to how it resonates with contemporary space imaging practices. While the article is itself part of a larger project in which we elaborate, with theoretical humanities and artistic research methods, on the visual culture of environmental imaging, we are unable in this restricted space to engage with the multiple contexts of this aspect of visual culture. Hence we will focus on the question of computational microtime, the visualized and predicted Earth times, and the hinge at the centre of this: the predicted time that comes out as an image. The various chrono-techniques[7] that have entered the vocabulary of media studies are particularly apt in offering a cartography of the analytical procedures that lie behind the production of time. Hence the issue is not only what temporal processes are embedded in media technological operations, but also what sounds like a merely tautological statement: what times are responsible for the production of time. What times of calculation produce imagined futures, statistically viable cases, predicted worlds? In other words, what microtemporal times lie, in our case, behind a sense of futurity that is conditioned in a calculational, software-based and dataset-determined system?
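To illustrate how such predictions run ahead of their own time on the microtemporal level, a trained model of the kind sketched above can be applied to its own output: each predicted frame is appended to the input window and used to predict the next one. The rollout function below is, again, a hedged sketch under that assumption rather than the procedure of any specific system; it presumes the hypothetical NextFramePredictor defined earlier.

```python
# Continuing the sketch above: autoregressive rollout of predicted frames.
import torch

def rollout(model, past, steps):
    """Autoregressively predict `steps` future frames from `past` frames.

    past: tensor of shape (batch, k, H, W) holding the k most recent frames.
    Returns a tensor of shape (batch, steps, H, W) of predicted frames.
    """
    frames = []
    with torch.no_grad():
        for _ in range(steps):
            next_frame = model(past)                        # (batch, 1, H, W)
            frames.append(next_frame)
            # Drop the oldest frame, append the newly predicted one:
            past = torch.cat([past[:, 1:], next_frame], dim=1)
    return torch.cat(frames, dim=1)

# With the NextFramePredictor and past_frames from the earlier sketch:
# future = rollout(model, past_frames, steps=10)
# Each of the 10 frames in `future` is an image of a time that has not (yet)
# been recorded, computed entirely from prior recordings.
```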
[1] Sean Cubitt, Three Geomedia, in: Ctrl-Z 7, 2017.
[2] Paul Virilio, Polar Inertia, London-Thousand Oaks-New Delhi 2000, p. 55.
[3] Ibid., p. 61.
[4] See Joanna Zylinska, Nonhuman Photography, Cambridge (MA) 2017.
[5] Kathryn Yusoff, The Geoengine: geoengineering and the geopolitics of planetary modification, in: Environment and Planning A 45, 2013, pp. 2799–2808.
[6] See also Benjamin Bratton, What We Do Is Secrete: On Virilio, Planetarity and Data Visualisation, in: John Armitage/Ryan Bishop (eds.), Virilio and Visual Culture, Edinburgh 2013, pp. 180–206, here pp. 200–203.
[7] Wolfgang Ernst, Chronopoetics. The Temporal Being and Operativity of Technological Media, London-New York 2016.
good stuff thanks, will the coming article be OA?
let’s see when we get there – it might come out in German first. Will try to find a way to make some of this OA for sure.
thanks that would be good