So, what’s this risograph thing?

An outline image of a risograph machine, printed in pink

As the work I’m developing moves towards the print stage, it’s time to explain a little more about the risograph process.

My print and printmaking experience began with etching (particularly photo-etching) and letterpress, then developed to include screenprinting and various relief processes, before the transition to digital. Last year, I got the opportunity to develop some work with my friend and collaborator Ruth Jones, who suggested we learn to use the risograph process. This post uses images from developing that work.

Risograph is sometimes described as a hybrid of photocopying and screenprinting. Like a photocopier, the image you are printing is placed on the glass bed at the top of the machine; paper feeds in from one side and comes out printed on the other. Inside the machine, however, the printing process involves creating a type of stencil (referred to as the master). The stencil leaves gaps through which the ink is pushed, creating the image. Risographs can also hold two colours of ink, and the cartridges can be swapped, so it’s possible to print in one colour, then reload the paper and overprint in a second.

An orange repeated pattern printed on white card
Single colour print, waiting for the second print process

Riso uses soy oil-based ink and is low energy, so it has some strong green credentials. Like many print processes, it can produce very simple or much more complex prints, meaning it appeals to a broad range of artists who might create their source imagery in many different ways. Because you’re working with a limited number of ink colours, colour mixing becomes a bigger part of the creative process – for example, what might be blue in your original image could be printed as pink. The way the stencils and paper-feed mechanisms work means there’s always a degree of variation between prints – something printmakers are used to, but which commercial print processes avoid. You also get a sense of how different machines operate, and of their individual quirks – variations that printers and printmakers learn to work with.

The machine I will be using at TOW currently contains two colours – black and orange – so I’m developing my imagery with this in mind. What I have to do is create a black-and-white positive for each colour layer – one that will print in black, and one that will print in orange. Digital techniques can make producing these colour separations a bit easier, but a physical print process will always produce a different outcome from how something looks on screen. Again, working with these variations becomes part of the creative process.
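As a rough illustration of the digital side, here’s a minimal sketch of one way a two-layer separation could work – not the tool or thresholds I’m actually using, just an assumed grayscale grid (0 = black, 255 = white) split into two binary positives by arbitrary tonal cut-offs:

```python
# Minimal sketch of a two-colour separation, assuming the source
# image is a grid of grayscale values (0 = black, 255 = white).
# The threshold values are arbitrary choices for illustration.

def separate(image, black_below=85, orange_below=170):
    """Split a grayscale image into two binary positives.

    Returns (black_layer, orange_layer); True means
    "ink prints here" on that layer.
    """
    # Darkest tones go to the black positive.
    black_layer = [[px < black_below for px in row] for row in image]
    # Midtones go to the orange positive; light tones print on neither.
    orange_layer = [[black_below <= px < orange_below for px in row]
                    for row in image]
    return black_layer, orange_layer

# Example: a tiny 2x3 "image"
img = [[0, 100, 200],
       [50, 150, 255]]
black, orange = separate(img)
```

Each resulting layer would then be printed out as a black-and-white positive for its ink colour; in a real workflow you’d do this in an image editor with dithering or halftoning rather than a hard threshold.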

A stack of white cards printed with orange image and black text
A duotone print example

Using the space safely at the moment means minimising my time in the TOW print space, so I’ll be producing my colour separation positives at home, and I’ll only have short print runs. But I’ve got hold of a really nice paper recommended for riso printing, so I’m looking forward to some hands-on printing again!


“Hello work”

This morning I came across the following article on problems with machine translation (which I’ve been referring to as AI transcription). Language professionals in Japan have flagged the issue; it was interesting to see them confronting the same perception of mistranslation as merely amusing – they point out the real dangers of miscommunication.

“…the group is most concerned about the negative impact that official miscommunications could have on tourism and Japan’s growing foreign community in the case of an earthquake or a medical emergency.”

Living = Dark Matter

“The official website of Meguro ward in Tokyo, for example, renders kurashi – or “living” – as “dark matter”, while the Kobe municipal government, turns sumai (home) as “I’m sorry”, the machine translation having apparently misread the original word as sumanai, a casual form of apology.”

www.theguardian.com/world/2020/nov/18/hello-work-or-job-centre-language-experts-japan-english


A Conversation (Word Salad)

This week I’ve been focusing on practicalities for making risograph prints. As my region (England) is in lockdown, this involves a lot of forward planning, material purchase and prep that’s not very interesting to read about. So I’ve also been experimenting with AI transcripts to create a little script from some of the most egregious examples of distortion (looking at you, Microsoft Teams).

This has been added as a document and images.

The aim is to give the reader a feel for what it’s like trying to make sense of a situation which isn’t accessible, and the impossibility of acting on information which doesn’t make sense. During the current pandemic, misunderstanding is not merely an amusing anecdote (à la Auberon Waugh’s tale of mishearing “press freedom” and delivering a lecture on “breast feeding”) – it’s dangerous. The UK government currently offers no BSL interpretation for its own briefings. The BBC provides interpreters for government announcements, but not everyone has access to the BBC, clips shared on social media won’t be inherently accessible, and scientific briefings are not shown with BSL interpretation at all. It’s the government’s responsibility, not the BBC’s. Too many organisations push the responsibility for accessing information onto individuals rather than thinking about how they can make it straightforward.

So, anyway, I’ve been tiger, because you know, the man who chases two hairs catches mom.


Lipreading in the age of COVID-19

I was taught lipreading before I began using BSL (British Sign Language) and use the two communication methods in tandem. The term “lipreading” is somewhat misleading, as it’s not just lip patterns that contribute to understanding – you pick up information from the rest of the face, from body language, and from the contextual environment (context is something I’ll be picking up on again). My biggest obstacle is dark sunglasses that block the information you see around people’s eyes; if lip patterns provide information on words, the eye area often gives the equivalent of tone of voice, and lipreading people wearing sunglasses translates as a monotone to me.

During 2020, the ratio has been flipped, with masks preventing lipreading, but often leaving the eyes and surrounding areas clear; I’ve started to notice that I still pick up information, so can (sometimes) recognise tone, even if there’s no way to pick up words. I’ve also noticed hearing people struggling with understanding (with masks muffling voices), and am wondering if this will impact on people’s thinking about hearing loss in the longer term. Will their own difficulties lead to more understanding? Will they turn to D/deaf people as communication experts they can learn from?

I’ve been playing with images from the 1986 edition of “Lipreading – A Guide for Beginners”, masking the lip patterns illustrated. I’m planning a duotone risograph print, perhaps using several of these images – due to local lockdown I can only prep my files at present, so anything shown at this stage is an approximation.

Black and white photo showing a woman’s face. Superimposed is an orange facemask. Only her eyes are visible. On the bottom left are the letters “OO”.
“OO”- Lipreading in the age of COVID (study for risograph) 2020


“He maybe did” or “He may be dead”?

2020 has been a year of information distortion for many people in many ways. Being deaf means my focus has been on digital and physical access – or lack of access – to information, something I’ll be exploring during the residency. Originally, I had been viewing my planned areas of research – typography and dummy text, AI transcription, and sound effect captioning – as distinct, albeit with some loose links. Maybe these areas have more in common than I envisaged?

I’ve begun looking at Dr Rachael Tatman’s research around conversational AI and linguistics and have been struck by the similarities between the areas of error in AI, conventional listening, and lipreading. These are often mirrored in typographic and compositing errors.

The images in this post are taken from

  • Lipreading – A Guide for Beginners
  • Finer Points in the Spacing and Arrangement of Type
  • An AI voice transcription programme

The header text on this post is taken from a research paper written by Dr Tatman in 2017.

Black text on white background discussing hyphenated words. The example given is “camellia”. The word is split across two lines so that it initially reads “camel-”.
Dark blue text on white background from an AI transcription programme. The text does not make sense, showing that the transcription is distorted.
