04 ಕಥೆ Kathe (Story) / Dematerialise

A pixelated form of a hand is in the foreground in bright yellow and orange. It is set against a grey checkered background, and to its left is an assortment of individual pixels in bright red. Tinges of blue appear throughout the image.
Image still from Dematerialise by Vishal Kumaraswamy

As an artist working with experimental technologies, hacking and re-purposing tools to create artistic works, I’m often looking for ways in which I can create intimate shared experiences. Even before the pandemic, much of my practice was conducted solely through computer-based interactions due to a lack of funding and other resources. This mode of working allowed me to focus my practice on making accessible works, and I began thinking about the accessibility of my works – in terms of language, technology and context – within a larger contemporary art conversation.


03 your dataset won’t let me thrive / your dataset must die

‘your dataset won’t let me thrive / your dataset must die’ are a pair of video essays that seek to counter the mythologies surrounding Artificial Intelligence datasets & algorithms. They are carried out as a comparative study of the works of the Black Beat poet Bob Kaufman and the Kannada Dalit poet Siddalingaiah, whose words (in translation) are input into the text-based neural network GPT-2. The visual aesthetics of the work are drawn from generative AI imagery of brown faces and creative programming, as well as animated representations of each poet’s words alongside text generated by the algorithm. The algorithm’s inability to generate text drawn from sufficient references to Black & Bahujan lived experiences reveals the encoded biases within the dataset and traces their origins to harmful mythologies of Caste & Race.

The works were commissioned by the Mozilla Foundation as part of the Reclaiming AI Futures project for the AI Observatory (https://ai-observatory.in/).

The image is a screengrab from the video 'your dataset won't let me thrive' and contains text laid against a black background with some generative abstract imagery. The text reads 'Abomunists Join Nothing But Their Hands or Legs, or Other Same'
Screengrab from ‘your dataset won’t let me thrive’

The image is a screengrab from the video 'your dataset must die' and contains abstract imagery of an AI-generated face set against a dark blue background.
Screengrab from ‘your dataset must die’

02 Swaayattate (Autonomy)

The image contains a large eyeball with a pinkish cornea set against a black background.

In 2020, I was able to bring the ideas behind Subaltern Futurism as a speculative framework into my practice through my work Swaayattate (Autonomy). The work is an investigation into the complex entanglements of the synthetic and organic worlds. Taking the form of a bilingual trilogy, the films are set inside a computer repair marketplace in Bangalore and examine the nature of human-machine relationships through the contemporary lens of gender, caste and labour. The narrative moves between multiple timelines as the evolution of an embedded neural network references prescient concerns around language, accessibility and justice.

Portions of the script for the films were written in collaboration with the text-based neural network GPT-2 Transformer (https://transformer.huggingface.co/), extracting & revealing the extent of the encoded biases within this AI model. GPT-2 is essentially a text generator, similar to the autocomplete functions on our phones: you input words or sentences and the neural network generates the next word or sentence using pre-trained machine-learning models. The model is widely hailed as coming very close to human speech and syntax, but my interactions with it have proven this to be highly misleading, as its output contains encoded biases brought over from the subjectivity of the programmers and its own training data. Chapter 2, ADI, speculates upon this process of transference of social biases into algorithmic ones.
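For readers curious what this kind of collaboration looks like in practice, below is a minimal sketch of prompting GPT-2 locally with the open-source transformers library. This is an illustrative stand-in, not the hosted Write With Transformer tool linked above, and the prompt is a hypothetical seed line rather than a line from the films’ script.

```python
# A minimal sketch of prompting GPT-2 via the open-source transformers
# library. Assumptions: a local stand-in for the hosted Write With
# Transformer tool; the prompt is a hypothetical seed line.
from transformers import pipeline

# Load the pre-trained GPT-2 model as a text generator.
generator = pipeline("text-generation", model="gpt2")

# Seed the model with a phrase and let it continue the text,
# autocomplete-style. Sampling several continuations surfaces the
# patterns - and biases - of the model's training data.
prompt = "The machine remembers"
outputs = generator(prompt, max_new_tokens=40, do_sample=True,
                    num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
```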

Excerpt from Swaayattate (Autonomy)

“Hello work”

This morning I came across the following article on problems with machine translation (which I’ve been referring to as AI transcription). These problems have been flagged by language professionals in Japan; it was interesting to see them dealing with the same perception of mistranslation as merely amusing – they raise the dangers of miscommunication.

“…the group is most concerned about the negative impact that official miscommunications could have on tourism and Japan’s growing foreign community in the case of an earthquake or a medical emergency.”

Living = Dark Matter

“The official website of Meguro ward in Tokyo, for example, renders kurashi – or “living” – as “dark matter”, while the Kobe municipal government turns sumai (home) into “I’m sorry”, the machine translation having apparently misread the original word as sumanai, a casual form of apology.”

www.theguardian.com/world/2020/nov/18/hello-work-or-job-centre-language-experts-japan-english

“He maybe did” or “He may be dead”?

2020 has been a year of information distortion for many people in many ways. Being deaf means my focus has been around digital and physical access – or lack of access – to information, something I’ll be exploring during the residency.  Originally, I had been viewing my planned areas of research – typography and dummy text, AI transcription, and sound effect captioning – as distinct, albeit with some loose links. Maybe these areas have more in common than I envisaged?

I’ve begun looking at Dr Rachael Tatman’s research on conversational AI and linguistics, and have been struck by the similarities between the areas of error in AI transcription, conventional listening, and lipreading. These are often mirrored in typographic and compositing errors.

The images in this post are taken from

  • Lipreading – A Guide for Beginners
  • Finer Points in the Spacing and Arrangement of Type
  • An AI voice transcription programme

The header text on this post is taken from a research paper written by Dr Tatman in 2017.

Black text on white background discussing hyphenated words. The example given is "camellia". This word is split across two lines so that the word initially reads "camel-".
Dark blue text on white background from an AI transcription programme. The text does not make sense, showing that the transcription is distorted.

AI

I started using an AI to create imagery that mashes together images by multiple authors, producing endless variations through infinite combinations. Creating hybrid visions of chimeras, phantasms and abstractions, the AI uses a biological labelling system for its creative process – you can ‘edit genes’ and crossbreed, as well as view the family tree of image histories and relationships. Computation strives for biological variety.
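The ‘gene editing’ metaphor maps quite directly onto how such generators typically work under the hood: each image corresponds to a vector of numbers in the model’s latent space, and crossbreeding blends two of those vectors. A minimal sketch of the idea follows – the function names and the 512-gene vector size are my own assumptions for illustration, not the tool’s actual API.

```python
import numpy as np

def crossbreed(parent_a, parent_b, weight=0.5):
    """Blend two latent 'gene' vectors; weight sets which parent dominates."""
    return weight * parent_a + (1.0 - weight) * parent_b

def mutate(genes, strength=0.1, rng=None):
    """Nudge every gene with random noise to produce a new variation."""
    rng = rng or np.random.default_rng()
    return genes + strength * rng.standard_normal(genes.shape)

rng = np.random.default_rng(seed=7)

# Two 'parent' images, each stood in for here by a 512-number latent vector.
parent_a = rng.standard_normal(512)
parent_b = rng.standard_normal(512)

# A child inherits a weighted blend of both parents plus a small mutation;
# feeding this vector to the generator network would render the hybrid image.
child = mutate(crossbreed(parent_a, parent_b, weight=0.7), strength=0.05)
print(child.shape)  # (512,)
```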

These images are difficult to identify and label; to me they look organic – like many different lifeforms mixed together. I am interested in organisms that sit outside our usual frames of reference, or that are difficult to scientifically label. Much of life on earth hasn’t been discovered, let alone named, and I am interested in the limits of our human understanding through our technologies.

For this residency I would like to use several different viewpoints to categorise this imagery. I will do this in several ways: creating a new pseudo-scientific reference system, and asking multiple human subjects and an AI to describe the images.

A grid of images generated by an AI. These look organic and painterly, some look like they have insect body parts, fur, mouths and eyes mashed together creating formless mutants.
A screengrab of a Google image search for the Hourglass Trapdoor Spider, which has a pattern on its abdomen that looks like an ancient Mayan symbol.
^^ Artist description