“Hello work”

This morning I came across the following article about problems with machine translation (which I’ve been referring to as AI transcription). These issues have been flagged by language professionals in Japan; it was interesting to see them pushing back against the same perception of mistranslation as merely amusing – they raise the real dangers of miscommunication.

“…the group is most concerned about the negative impact that official miscommunications could have on tourism and Japan’s growing foreign community in the case of an earthquake or a medical emergency.”

Living = Dark Matter

“The official website of Meguro ward in Tokyo, for example, renders kurashi – or “living” – as “dark matter”, while the Kobe municipal government renders sumai (home) as “I’m sorry”, the machine translation having apparently misread the original word as sumanai, a casual form of apology.”


“He maybe did” or “He may be dead”?

2020 has been a year of information distortion for many people in many ways. Being deaf means my focus has been on digital and physical access – or lack of access – to information, something I’ll be exploring during the residency. Originally, I had been viewing my planned areas of research – typography and dummy text, AI transcription, and sound effect captioning – as distinct, albeit with some loose links. Maybe these areas have more in common than I envisaged?

I’ve begun looking at Dr Rachael Tatman’s research around conversational AI and linguistics and have been struck by the similarities between the areas of error in AI, conventional listening, and lipreading. These are often mirrored in typographic and compositing errors.

The images in this post are taken from

  • Lipreading – A Guide for Beginners
  • Finer Points in the Spacing and Arrangement of Type
  • An AI voice transcription programme

The header text on this post is taken from a research paper written by Dr Tatman in 2017.

Black text on white background discussing hyphenated words. The example given is "camellia". This word is split across two lines so that the word initially reads "camel-".
Dark blue text on white background from an AI transcription programme. The text does not make sense, showing that the transcription is distorted.


I started using an AI to create imagery which mashes together images by multiple authors to produce endless variations through infinite combinations. Creating hybrid visions of chimeras, phantasms and abstractions, the AI uses a biological labelling system for its creative process – you can ‘edit genes’ and crossbreed, as well as view the family tree of image histories and relationships. Computation strives for biological variety.

These images are difficult to identify and label; to me they look organic – like many different lifeforms mixed together. I am interested in organisms which sit outside our usual frames of reference, or that are difficult to scientifically label. Much of life on earth hasn’t been discovered, let alone named, and I am interested in the limits of our human understanding through our technologies.

For this residency I would like to use several different viewpoints to categorise this imagery. I will do this in several ways: creating a new pseudo-scientific reference system, and using multiple human subjects and an AI to describe the images.

A grid of images generated by an AI. These look organic and painterly, some look like they have insect body parts, fur, mouths and eyes mashed together creating formless mutants.
A screengrab of a Google image search for the Hourglass Trapdoor Spider, which has a pattern on its abdomen which looks like an ancient Mayan symbol.