The moose doesn’t strike any emotion.

Although most recently I’ve been focusing on creating the residency risograph prints, this post returns to the idea of context as a fundamental factor in untangling lipreading, typographic and AI errors. Lipreading is often like completing a giant freeform crossword puzzle; filling in one set of clues reduces the parameters for the next set, and the next. Context enables you to instantly discard broader variables and zoom in on the most likely possibilities. Working with BSL/SSE interpreters does the same thing; while “vignette” and “finesse” will create almost identical lip patterns, the BSL interpretations use completely different hand shapes and movements.
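
If you wanted to sketch that narrowing-down in code, it might look something like the toy filter below: words sharing a lip pattern are ranked by how well they fit a context word. The homophene groups and context counts are invented for illustration – they aren't drawn from any real lipreading or language model.

    # Toy sketch of context pruning homophenes (words with near-identical
    # lip patterns). All data below is invented for illustration.

    HOMOPHENES = {
        "pattern_a": ["vignette", "finesse"],  # the pair from this post
    }

    # Invented counts: how often each candidate follows a given context word.
    CONTEXT_COUNTS = {
        ("photographic", "vignette"): 9,
        ("photographic", "finesse"): 1,
        ("tactical", "finesse"): 8,
    }

    def rank_candidates(lip_pattern: str, context_word: str) -> list[str]:
        """Order a homophene set by how well each word fits the context."""
        return sorted(
            HOMOPHENES[lip_pattern],
            key=lambda w: CONTEXT_COUNTS.get((context_word, w), 0),
            reverse=True,
        )

    print(rank_candidates("pattern_a", "photographic"))
    # ['vignette', 'finesse'] - context discards the unlikely reading first

Each clue filled in shrinks the search space for the next one, just as in the crossword.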

Lipreading is physically impossible with masks, and they also remove the facial expressions that give tone and context. But there are still contextual clues: where are you, and what’s happening around you?

The same issues seem to occur with AI transcription; I examine the context in which errors appear to work out the most likely interpretation. This is a real challenge, as you have only a tiny amount of time before the conversation moves on – possibly bringing another set of mangled meaning to decipher. My latest work looks at this mental juggling by using the risograph duotone process to “correct” mangled AI output with proofreaders’ marks, representing the two simultaneous thought processes. That work will be in the residency exhibition, but here are a few puzzles for you to be going on with.

“The moose doesn’t strike any emotion.”

“The perfect cinnamon.”

“I know there is a framework for achieving Oswald by doing do command Libra.”

So, what’s this risograph thing?

An outline image of a risograph machine, printed in pink.

As the work I’m developing moves towards the print stage, it’s time to explain a little more about the risograph process.

My printmaking experience started with etching (particularly photo-etching) and letterpress, then developed to include screenprinting and various relief processes before the transition to digital. Last year, I got the opportunity to develop some work with my friend and collaborator Ruth Jones, who suggested we learn to use the risograph process. This post uses images from developing that work.

“Hello work”

This morning I came across the following article about problems with machine translation (which I’ve been loosely referring to as AI transcription). The problems have been flagged by language professionals in Japan; it was interesting to see them dealing with the same perception of mistranslation as merely amusing – they point out the real dangers of miscommunication.

“…the group is most concerned about the negative impact that official miscommunications could have on tourism and Japan’s growing foreign community in the case of an earthquake or a medical emergency.”

Living = Dark Matter

“The official website of Meguro ward in Tokyo, for example, renders kurashi – or “living” – as “dark matter”, while the Kobe municipal government renders sumai (home) as “I’m sorry”, the machine translation having apparently misread the original word as sumanai, a casual form of apology.”

www.theguardian.com/world/2020/nov/18/hello-work-or-job-centre-language-experts-japan-english
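
The sumai/sumanai slip is easy to believe once you see how alike the two words look to a machine. A quick illustration in Python – the romanised spellings are my assumption for the example:

    # How similar are the two words the machine translation confused?
    # Romanised spellings are assumed here purely for illustration.
    from difflib import SequenceMatcher

    similarity = SequenceMatcher(None, "sumai", "sumanai").ratio()
    print(round(similarity, 2))  # 0.83 - the strings are over 80% alike

Two characters are all that separate “home” from “I’m sorry”; without a human checking the context, the system has no way to notice the slip.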

A Conversation (Word Salad)

This week I’ve been focusing on practicalities for making risograph prints. As my region (England) is in lockdown, this involves a lot of forward planning, material purchase and prep that’s not very interesting to read about. So I’ve also been experimenting with AI transcripts to create a little script from some of the most egregious examples of distortion (looking at you, Microsoft Teams).

The script has been added to this post as a document and as images.

The aim is to give the reader a feel for what it’s like trying to make sense of a situation that isn’t accessible, and the impossibility of acting on information that doesn’t make sense. During the current pandemic, misunderstanding is not merely an amusing anecdote (à la Auberon Waugh’s tale of mishearing “press freedom” and delivering a lecture on “breast feeding”); it’s dangerous. Currently, the UK government provides no BSL interpretation for its own briefings. The BBC provides interpreters for government announcements, but not everyone has access to the BBC, and clips shown on social media won’t be inherently accessible. Scientific briefings aren’t shown with BSL interpretation at all. It’s the government’s responsibility, not the BBC’s. Too many organisations push the responsibility for accessing information onto individuals rather than thinking about how they can make it straightforward.

So, anyway, I’ve been tiger, because you know, the man who chases two hairs catches mom.

Lipreading in the age of COVID-19

I was taught lipreading before I began using BSL (British Sign Language) and use the two communication methods in tandem. The term “lipreading” is somewhat misleading, as it’s not just lip patterns that contribute to understanding – you pick up information from the rest of the face, from body language, and from the contextual environment (context is something I’ll be picking up on again). My biggest obstacle is dark sunglasses that block the information around people’s eyes; if lip patterns provide information on words, the eye area often gives the equivalent of tone of voice, and lipreading people wearing sunglasses translates as a monotone to me.

During 2020, the ratio has been flipped, with masks preventing lipreading, but often leaving the eyes and surrounding areas clear; I’ve started to notice that I still pick up information, so can (sometimes) recognise tone, even if there’s no way to pick up words. I’ve also noticed hearing people struggling with understanding (with masks muffling voices), and am wondering if this will impact on people’s thinking about hearing loss in the longer term. Will their own difficulties lead to more understanding? Will they turn to D/deaf people as communication experts they can learn from?

I’ve been playing with images from the 1986 edition of “Lipreading – A Guide for Beginners”, masking the lip patterns illustrated. I’m planning a duotone risograph print, perhaps using several of these images – due to local lockdown I can only prep my files at present, so anything shown at this stage is an approximation.

Black and white photo showing a woman's face. Superimposed is an orange facemask. Only her eyes are visible. On the bottom left are the letters "OO".
“OO”- Lipreading in the age of COVID (study for risograph) 2020

“He maybe did” or “He may be dead”?

2020 has been a year of information distortion for many people in many ways. Being deaf means my focus has been around digital and physical access – or lack of access – to information, something I’ll be exploring during the residency. Originally, I had been viewing my planned areas of research – typography and dummy text, AI transcription, and sound effect captioning – as distinct, albeit with some loose links. Maybe these areas have more in common than I envisaged?

I’ve begun looking at Dr Rachael Tatman’s research around conversational AI and linguistics and have been struck by the similarities between the areas of error in AI, conventional listening, and lipreading. These are often mirrored in typographic and compositing errors.
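
Speech recognition research usually quantifies these errors as word error rate (WER): the number of word-level substitutions, insertions and deletions needed to turn the transcript into what was actually said, divided by the number of words in the reference. Here’s a minimal sketch of the calculation – the example pair is this post’s title, not data from Dr Tatman’s research:

    # Minimal word error rate (WER) calculation.
    # Example phrases are taken from this post's title, for illustration.

    def wer(reference: str, hypothesis: str) -> float:
        """Word-level edit distance divided by reference length."""
        ref, hyp = reference.split(), hypothesis.split()
        # Dynamic-programming table over word prefixes.
        prev = list(range(len(hyp) + 1))
        for i, ref_word in enumerate(ref, start=1):
            curr = [i]
            for j, hyp_word in enumerate(hyp, start=1):
                cost = 0 if ref_word == hyp_word else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution
            prev = curr
        return prev[-1] / len(ref)

    print(wer("he may be dead", "he maybe did"))  # 0.75

Three of the four reference words come out wrong – a 75% error rate on a phrase where the difference really matters.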

The images in this post are taken from

  • Lipreading – A Guide for Beginners
  • Finer Points in the Spacing and Arrangement of Type
  • An AI voice transcription programme

The header text on this post is taken from a research paper written by Dr Tatman in 2017.

Black text on white background discussing hyphenated words. The example given is "camellia"; this word is split across two lines so that it initially reads "camel-".
Dark blue text on white background from an AI transcription programme. The text does not make sense, showing how distorted the transcription is.
