Lipreading in the age of COVID-19

I was taught lipreading before I began using BSL (British Sign Language) and use the two communication methods in tandem. The term “lipreading” is somewhat misleading, as it’s not just lip patterns that contribute to understanding – you pick up information from the rest of the face, from body language, and from the contextual environment (context is something I’ll be picking up on again). My biggest obstacle is dark sunglasses, which block the information you see around people’s eyes; if lip patterns provide information on words, the eye area often gives the equivalent of tone of voice, so lipreading people wearing sunglasses translates as a monotone to me.

During 2020, the situation has been reversed, with masks preventing lipreading but often leaving the eyes and surrounding areas clear; I’ve started to notice that I still pick up information, so can (sometimes) recognise tone, even if there’s no way to pick up words. I’ve also noticed hearing people struggling to understand (with masks muffling voices), and am wondering whether this will change people’s thinking about hearing loss in the longer term. Will their own difficulties lead to more understanding? Will they turn to D/deaf people as communication experts they can learn from?

I’ve been playing with images from the 1986 edition of “Lipreading – A Guide for Beginners”, masking the lip patterns illustrated. I’m planning a duotone risograph print, perhaps using several of these images – due to local lockdown I can only prep my files at present, so anything shown at this stage is an approximation.

Black and white photo showing a woman’s face. Superimposed is an orange facemask. Only her eyes are visible. On the bottom left are the letters “OO”.
“OO” – Lipreading in the age of COVID (study for risograph), 2020

“He maybe did” or “He may be dead”?

2020 has been a year of information distortion for many people in many ways. Being deaf means my focus has been on digital and physical access – or lack of access – to information, something I’ll be exploring during the residency. Originally, I had been viewing my planned areas of research – typography and dummy text, AI transcription, and sound effect captioning – as distinct, albeit with some loose links. Maybe these areas have more in common than I envisaged?

I’ve begun looking at Dr Rachael Tatman’s research around conversational AI and linguistics, and have been struck by the similarities between the patterns of error in AI transcription, conventional listening, and lipreading. These errors are often mirrored in typographic and composition errors.

The images in this post are taken from

  • Lipreading – A Guide for Beginners
  • Finer Points in the Spacing and Arrangement of Type
  • An AI voice transcription programme

The header text on this post is taken from a research paper written by Dr Tatman in 2017.

Black text on white background discussing hyphenated words. The example given is “camellia”. This word is split across two lines so that the word initially reads “camel-”.
Dark blue text on white background from an AI transcription programme. The text does not make sense, showing that the transcription is distorted.