When sunspots and clouds look like nuclear missiles

Over the past couple of years I’ve been thinking a lot about how computers see the world – through machine vision technologies and various data analysis systems – and how this shapes our lives.

Facial recognition technologies used by police have been found to falsely identify and criminalise people (misidentification rates were as high as 93% in one study, and 81% in another). CV-sorting and hiring algorithms are given the power to select and choose job candidates (Amazon’s tool became biased against female candidates, and HireVue’s claimed to make predictions based on a candidate’s tone of voice and facial movements). But my favourite example is one that decided the ideal candidate would be called Jared and would play lacrosse…

There are algorithms that will automatically move you further down a medical waiting list, those that decide whether you have access to housing or a loan, and countless more examples.

The decisions that computers make are hugely consequential; they can’t be assumed to be accurate or infallible.

That’s why I was fascinated when I came across the 1983 story of Stanislav Petrov, whose questioning of what the computer claimed to see averted a nuclear missile strike capable of killing 50% of the US population. In this case, it was sunspots reflecting off high-altitude clouds that looked to the computer like an incoming missile attack. The system reported its highest confidence level: a definite attack, with no uncertainty.

The sun and clouds at the Autumn equinox became an act of war, in the eyes of the machine.

[Image attribution: Bass Photo Co Collection, Indiana Historical Society]


Clouds and control

This week I’ve been experimenting with layering up cloud imagery and the control dashboards used within missile control centres.

I’d been thinking about the false premise that more data guarantees more clarity, something that James Bridle talks about in his book New Dark Age. It’s a premise often used to justify the perpetuation of surveillance technologies. But as Bridle points out, more data also means greater complexity and increased potential for confusion and incomprehension. It is an uneasy paradox: the idea that more data enables us to see more clearly is destabilised when the reality is more cloudiness.

Most of us have heard of, or experienced, the temptation of the ‘big red button’: the instruction not to press urges us to do the opposite. But I’ve also been thinking that the mere existence of a button sets the stage for the events that follow. Because it is there, the temptation is to use it (this is one of the ongoing arguments against the likes of Trident). So my experiments this week also looked at the aesthetics of disappearing dashboard controls, blurring into this clouded vision.

The images of dashboards are released under a Creative Commons license by photographer Todd Lapin and show the control panels within SF-88, a former Nike missile base in the Marin Headlands, US. Nike missiles were Cold War-era anti-aircraft missiles, deployed between the 1950s and 1970s and often equipped with nuclear warheads. Here is a link to the Creative Commons license: https://creativecommons.org/licenses/by-nc/2.0/

The cloud videos are from a dataset created by the Multimodal Vision Research Laboratory (credit below) and also feature videos collected by Martin Setvak.

Cloud dataset credits: Jacobs N, Abrams A, Pless R. 2013. Two Cloud-Based Cues for Estimating Scene Structure and Camera Calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 35:2526–2538. DOI: 10.1109/TPAMI.2013.55, and Jacobs N, Bies B, Pless R. 2010. Using Cloud Shadows to Infer Scene Structure and Camera Calibration. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 1102–1109. DOI: 10.1109/CVPR.2010.5540093.


Mistaken identities

Adversarial.io is a tool created to evade image recognition technologies and to reveal how differently machines and humans interpret images. Subtle noise added to an image can completely alter how an algorithm classifies it, while to human perception the image appears almost unchanged.
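The general idea behind such adversarial noise is to nudge each pixel slightly in the direction that most increases the score of a wrong class. Below is a minimal sketch of that idea using a toy linear classifier on a random “image” vector — an illustration of the technique in general, not adversarial.io’s actual implementation, and all the names and values here are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a flat vector of 64 pixel intensities in [0, 1].
x = rng.random(64)

# Toy two-class linear classifier: scores = W @ v + b.
W = rng.standard_normal((2, 64))
b = np.zeros(2)

def predict(v):
    return int(np.argmax(W @ v + b))

original_label = predict(x)
other_label = 1 - original_label

# For a linear model, the gradient of (score[other] - score[original])
# with respect to the input is simply the difference of the weight rows.
grad = W[other_label] - W[original_label]

# Repeatedly take small steps in the sign of that gradient (the core of
# fast-gradient-sign-style attacks), clipping to keep valid pixel values,
# until the classifier's answer flips.
epsilon = 0.02
x_adv = x.copy()
for _ in range(200):
    if predict(x_adv) != original_label:
        break
    x_adv = np.clip(x_adv + epsilon * np.sign(grad), 0.0, 1.0)

print("original label:    ", original_label)
print("adversarial label: ", predict(x_adv))
print("max pixel change:  ", np.max(np.abs(x_adv - x)))
```

With only a few small steps, the per-pixel change stays tiny — invisible to a human looking at a real image — yet the classifier’s output flips entirely.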

I find it interesting in the context of my project because of the high confidence with which the M-10 computer declared its identification of the missiles (which turned out to be sunlight reflecting off clouds).

In my experiments, I gathered photos of nuclear missiles which, through such added noise, became sewing machines, freight cars, obelisks and totem poles in the eyes of computer vision.

Some of the nouns used to describe these missiles were quite obscure! I had to look up the definitions of a stupa (a dome-like building, usually containing relics and used for meditation), a barracouta (a type of fish) and a thresher (an agricultural machine for separating grain).

[Image description: A grid of photos showing various nuclear missiles, which have been labelled as sewing machines, trailer trucks, obelisks and other names by an image recognition AI. The background is colourful and pixelated.]
