Apophany vs Epiphany

It’s a ‘collotype’, which basically looks like a fuzzy x-ray, of a pelvis. Smudgy black and white, it’s not immediately obvious to non-medically trained eyes what we’re looking at, but after a while it seems that it’s upside-down, with the little tail bone sticking upwards, hips curling inwards.

This is at the core of what I’m working on. 

Jenn Ashworth proposes the idea of apophany vs epiphany in her 2019 memoir Notes Made While Falling. In the book, she describes a traumatic birth followed by a mental and physical unravelling. Part of this is apophenia – a kind of logic that connects patterns, names, numbers, anything that consensus reality would say is not actually connected. Most often it’s treated as a pathology, often in reference to conspiracy theories. Jenn says ‘I can’t stop connecting things’, and rather than self-pathologising, sets this beside her religious background to think about the experience of apophany alongside the experience of epiphany – the realisation of a profound and real truth. But how do you know which is happening within you?

This idea has been on my mind since I read the book (and reread it, and reread it), and it’s the heart of the work I’m making for Vital Capacities. The work I make here will also feed into a solo show of mine opening early next year, so it’s really good to be able to delve in deeply in this public way – a rare chance to share the intricacies of a looottt of research, experimentation, and thought.

Image Credit: This image is featured on the front cover of ‘Notes Made While Falling’. More info: A woman’s pelvis after a pubiotomy – to widen the birth canal for a vaginal delivery of a baby. Collotype by Römmler & Jonas after a radiograph made for G. Leopold and Th. Leisewitz, 1908.


Catastrophic Forgetting

The contemporary world is increasingly shaped by artificial intelligence (AI) models, some of which are known as ‘world models’. While these concepts gained significant attention in 2025, their origins can be traced much further back. Jürgen Schmidhuber introduced the foundational architecture for planning and reinforcement learning (RL) with interacting recurrent neural networks (RNNs) – a controller and a world model – in 1990. In this framework, the world model serves as an internal, differentiable simulation of the environment, learning to predict the outcomes of the controller’s actions. By simulating these action-consequence chains internally, the agent can plan and optimise its decisions. This approach is now commonly used in video prediction and in simulations within game engines, and it remains closely tied to cameras and image processing.
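A toy sketch of the idea, purely for illustration (the real 1990 architecture used trained recurrent networks; here the ‘world model’ is just a hand-written guess at the environment’s dynamics, and the ‘controller’ plans by simulating candidate action sequences internally before acting):

```javascript
// Hypothetical learned dynamics model: predicts the next state from
// the current state and an action. In a real system this would be a
// trained network, not a fixed rule.
const worldModel = (state, action) => state + action;

// Enumerate every action sequence of a given depth (actions: -1 or +1).
function allPlans(depth) {
  if (depth === 0) return [[]];
  return allPlans(depth - 1).flatMap(plan => [[-1, ...plan], [1, ...plan]]);
}

// Roll each candidate sequence through the world model (an internal
// simulation, never the real environment) and keep the one whose
// predicted final state lands closest to the goal.
function plan(start, goal, depth) {
  let best = null;
  let bestDistance = Infinity;
  for (const actions of allPlans(depth)) {
    let state = start;
    for (const action of actions) state = worldModel(state, action);
    const distance = Math.abs(state - goal);
    if (distance < bestDistance) {
      bestDistance = distance;
      best = actions;
    }
  }
  return best;
}
```

Calling `plan(0, 3, 3)` returns three `+1` steps: the agent has ‘imagined’ its way to the goal before ever moving.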

Despite these advances, a fundamental limitation first identified in 1990 continues to challenge progress: instability and catastrophic forgetting. As the controller guides the agent into new areas of experience, the network risks overwriting previously learned knowledge. This leads to fragile, non-lifelong learning, where new information erases older representations.

Furthermore, Prof. Yann LeCun noted in his presentation ‘The Shape of AI to Come’ at the AI Action Summit’s Science Days at the Institut Polytechnique de Paris (IP Paris) that the volume of data a large language model is trained on remains minimal compared with what a four-year-old child has taken in – roughly 1.1E14 bytes. One of his slide titles has stayed in my mind: “Auto-Regressive Generative Models Suck!” In reinforcement learning, an AI’s policies often remain static, unable to adapt to the unforeseen complexities of the real world – in other words, the AI does not learn after the training process.

Recently, emerging paradigms like Liquid Neural Networks (Ramin Hasani) and Dynamic Deep Learning (Richard Sutton) attempt to address this rigidity. However, these approaches still rely heavily on randomly selecting and reinitialising parts of the network to maintain learning dynamics and potentially improve real-time reaction and long-term learning, and they still face challenges in solving the problem of AI’s hallucinations. A fundamental paradigm shift for AI is needed in our time, but it takes time, and before it arrives, the current paradigm may already be overwhelming for both machines and humans.
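A minimal sketch of what catastrophic forgetting looks like, using an invented toy rather than any of the systems above: a single weight fitted by gradient descent on one task, then retrained on a second, loses the first entirely.

```javascript
// Toy catastrophic forgetting: one weight w, model y = w * x, trained
// by gradient descent. Task A wants w = 2; task B wants w = -1.
// Training on task B simply overwrites what task A taught the weight.

function train(w, targetW, steps, learningRate) {
  for (let i = 0; i < steps; i++) {
    for (let x = 1; x <= 5; x++) {
      const error = w * x - targetW * x;   // prediction minus target
      w -= learningRate * 2 * error * x;   // gradient of squared error
    }
  }
  return w;
}

const errorOnTaskA = w => Math.abs(w - 2); // distance from task A's solution

let w = 0;
w = train(w, 2, 100, 0.01);                // learn task A: w -> ~2
const errorAfterA = errorOnTaskA(w);
w = train(w, -1, 100, 0.01);               // learn task B: w -> ~-1
const errorAfterB = errorOnTaskA(w);       // task A knowledge is gone
```

After the second round of training, the weight sits near task B’s solution and the error on task A has jumped from roughly zero to roughly three: nothing in plain gradient descent protects old knowledge.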

Welcome

Photographed by YEUNG Tsz Ying


Hi, welcome to my online space. My name is Lazarus Chan, and I am a new media artist now based in Hong Kong.

My work sits at the intersection of science, technology, the humanities, and art, and I am particularly fascinated by themes of consciousness, time, and artificial intelligence. Much of my work is related to generative art and installations, posing questions from a humanistic and philosophical standpoint. I will create a new work, “Stochastic Camera (version 0.4) – weight of witness”, in August 2025, and you can follow my progress here.

This studio is an online space for me to continue my explorations and share my thoughts with you. I will write a series of posts, sharing my ongoing reflections on AI as it becomes more integrated into our world.

You are welcome to leave a comment and send me questions.

Lazarus Chan

More about the photo:
https://www.lazaruschan.com/artwork/golem-wander-in-crossroads

WELCOME TO MY STUDIO

Leah crouching down next to a very large dibond photograph. She's a white woman with dark blonde hair up in a bun, wearing all black. The photograph is 2 metres tall and 3 metres wide, and rests on white foam blocks. It is mostly darkness and shade, but at its left side glows into a warm red flare, over rippling sunlit water. The room might be a studio or domestic setting, with wooden floors and a closed door, a framed photo on the wall, and a door buzzer phone on the left.

Hi, I’m Leah Clements, an artist from and based in East London. 

I’m really interested in moments of transcendence – these might be near-death or out-of-body experiences, or profound shifts in psychology or physicality. Sometimes this is very scientific; sometimes it veers towards the paranormal. I want to get to these moments where you have one foot in this world and the other in another.

A lot of this comes from being chronically ill myself, but this point of departure usually expands outwards into other people’s experiences, with an intent to re-collectivise.

During the residency I’m planning to work on a new moving image scene where icons from ancient, medieval, and contemporary sites will come together to forge a new symbol. This is growing out of my research into apophany vs epiphany (more on that later!) and thinking about how we look for meaning when it feels lacking, and whether we can know if we’re looking for it in the right or wrong places. It will give me a chance to try (and play about with) a new technique of animation in film editing.

Comment any thoughts you have, or ask me questions – I look forward to sharing some of my research into this and talking with you about it…

Welcome to my Studio

Hi, I’m Elora Kadir and welcome to my studio. I’m an artist based in London, working across installation, drawing, photography, video, and found objects. My work centres around lived experience with disability, and the subtle ways it interacts with the world around us—whether that’s through navigating buildings, dealing with forms, or the quiet tensions that arise from systems designed for able-bodied norms.

During this residency, I’m planning to explore moments of dissonance – those small, often overlooked points where body and environment don’t quite align. I’m interested in how access (or lack of it) gets embedded into the everyday, and how art can make these frictions visible. I’ll be experimenting with materials, images, and texts to reflect on these encounters. I’m also curious as to how it will feel to be a part of this virtual space and what influence that may have (if any!) on my process.

Feel free to have a look around and leave a comment or message.

Warm golden sunlight streams through a window, casting the shadow of the window frame and two potted plants onto a beige wall. The shadows are clear and rectangular, creating a peaceful, minimalist composition. Part of a bed with soft, rumpled blankets and clothes is visible at the bottom of the image, adding a lived-in, cozy atmosphere. A curtain hangs partially open to the right, revealing a sliver of the window.
Window, 2022

Welcome to my studio

I’m excited to be part of this residency and to have the opportunity to focus, reflect, and share my process in a more open way than I typically allow.

Over the next few weeks, I’ll be using this time and space to explore how repetition, memory, and digital tools intersect — particularly how small acts of making can accumulate into something slower, quieter, and more intentional.

My practice often shifts between design and more contemplative visual work, and I’m interested in what happens when the usual boundaries blur — when something functional begins to feel poetic, or when a mistake reveals a new direction.

This residency is as much about paying attention as it is about producing. I’ll be sharing fragments, trials, and thoughts along the way.

Playing Back Sounds at SPECIFIC Timestamps with JavaScript!

Hello! So this has been super cool and fun, hope you find it fun too! I’ve been working with JavaScript to make sure we are playing sounds at the exact right times!

Imagine you have two different sound patterns:

  • Doom (1-second pause) Doom
  • Doom (half a second pause) Doom

To make this work, we need to play these sounds back at just the right moments. We can use a bit of JavaScript magic to play sounds at specific timestamps. Check out this script:

Code snippet in a dark-themed editor showing HTML and JavaScript for a web page titled 'Play Sounds at Timestamps'. The HTML structure includes a button to play sounds, and the JavaScript code defines an array of sound data with timestamps and a function to play these sounds at specified times using the Audio API and setTimeout.
An accessible version of the code is printed in full at the bottom of the page

Breaking It Down

  1. HTML Structure: We have an HTML page with a button that says “Play Sounds.” When you click it, the playSounds() function runs.
  2. Sound Data: The soundData array lists our sounds (doom and tak percussion sounds!) with their sources and the times they should play.
  3. playSounds Function: This function goes through each sound in soundData and uses setTimeout to play each one at the right time.

Our machine learning model first identifies when each sound occurs, and we then export those timestamps to create our soundData array. Using JavaScript, we can now play the sounds exactly when the AI model hears them! Yay!
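To sketch what that export step might look like (the detection format below is invented for illustration – the real one depends on the tool), we can filter the model’s detections by confidence and map them into the soundData shape the playback script expects:

```javascript
// Hypothetical model output: one entry per recognition window.
const detections = [
  { label: "doom", confidence: 0.94, timeMs: 1000 },
  { label: "background noise", confidence: 0.51, timeMs: 2000 },
  { label: "tak", confidence: 0.88, timeMs: 3000 }
];

// Keep confident doom/tak hits and convert them into the
// { src, timestamp } entries used by playSounds().
function toSoundData(detections, threshold) {
  return detections
    .filter(d => d.confidence >= threshold && d.label !== "background noise")
    .map(d => ({ src: d.label + ".mp3", timestamp: d.timeMs }));
}
```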


Here’s an accessible version of the code you can copy and paste if you’d like to give it a go yourself!

<!DOCTYPE html>
<html>
<head>
  <title>Play Sounds at Timestamps</title>
</head>
<body>
  <h1>Play Sounds at Timestamps</h1>
  <button onclick="playSounds()">Play Sounds</button>

  <script>
    // Each entry pairs an audio file with the moment it should start,
    // in milliseconds after the button is clicked.
    const soundData = [
      { "src": "sound1.mp3", "timestamp": 1000 },
      { "src": "sound2.mp3", "timestamp": 3000 },
      { "src": "sound3.mp3", "timestamp": 5000 }
    ];

    function playSounds() {
      // Schedule each sound: create its audio element up front, then
      // start playback once its timestamp has elapsed.
      soundData.forEach(data => {
        const audio = new Audio(data.src);
        setTimeout(() => {
          audio.play();
        }, data.timestamp);
      });
    }
  </script>
</body>
</html>

Un-making becomes un-gardening…

I thought about Polly Atkin’s poem Unwalking (referenced in my last post) a lot when I first returned to my allotment plot in January 2022, having not visited the site for 2 years due to shielding. We had been given formal notice by the allotment committee to either improve the plot or end our tenancy. I was in the midst of an intensely difficult period in my life, and I was unsure whether it was possible to continue the commitment. I was trying to figure out having a career as an artist whilst being sick, and how to do those things in a sustainable way that doesn’t just leave you burnt out. I believe that this figuring out will be a lifelong mission, one that never has a fixed answer. I still go through periods (just very recently in fact) where it feels like living in this body is truly incompatible with a career as an artist. But what did become clear when returning to my allotment in 2022 was that in rediscovering my gardening practice, I could do something more than just survive my body and my job; I could build something bigger, something beyond myself. I decided to give myself 6 weeks and see what might happen…

A digital photograph, landscape, colour, of what looks like a messy allotment plot on a cold winter’s day. It is a bright sunny day at what looks like sunset. The plot has all the usual allotment features like raised beds, sheds etc., and there are lots of bare patches and overgrown patches. The scene is calm, ordinary and quite beautiful in the winter light.
The allotment in January 2022

This was the first photo I took of what the plot looked like when I first returned in January 2022. It was such a special afternoon. It was a weekday and I had been working, and my mum had asked if I wanted to go to the plot just to have a look. I was reluctant. Part of having an energy-limiting condition means that I never know when I am overexerting myself, and I am always second-guessing whether or not the thing I did is what made my pain worse. It’s a particularly challenging aspect of living with sickness, and something I find really hard within the context of a career. So re-engaging with the allotment again on a normal working day felt pretty extreme; simply leaving the house and turning up felt like I was pushing the boundaries of what was possible (it always does). But that afternoon, I felt the spark of what has always drawn me to gardening, and amongst all the overgrown weeds and debris, I felt excited to think what might be possible here.

Polly’s poem Unwalking was in my head a lot as we began grappling with how to go about using the space again. It became clear quite quickly that the only way to manage the space at this point – whilst existing in crip-time – was to cover most of it up. So that’s what the first year was spent doing: taking things down and very slowly mulching and covering the beds. We began by adopting a no-dig approach, placing cardboard over the beds, then covering them in a mulch of compost or manure.

A digital photograph (portrait, colour), of what looks like some kind of garden or allotment plot on a winter’s day. There are bare trees in the background, and in front of them is a poly-tunnel covered in green mesh and some plants growing inside. In the foreground is a large rectangular raised bed covered in cardboard and what looks like a scattering of compost.
The first bed at the allotment with cardboard over it and a sprinkling of mulch
a digital photograph (portrait, colour), of what looks like a close up detail of some cardboard and paper on the ground, with big clods of compost or dirt scattered on top.
the first scatterings of mulch on cardboard
a digital photograph (portrait, colour), of what looks like a close up detail of some cardboard and paper on the ground, with big clods of compost or dirt scattered on top.
mulching as mark-making
A digital photograph (portrait, colour), of what looks like some kind of garden or allotment plot on a winter’s day. There are bare trees in the background, and in front of them is a poly-tunnel covered in green mesh and some plants growing inside. In the foreground is a large rectangular raised bed covered in cardboard and what looks like a heavy mulch of compost.
The first bed of the plot part-mulched with cardboard underneath

This mulching process was so exciting; it felt like I was making these large-scale collages with muck and cardboard. Again, scale becomes a really exciting component of what draws me to this gardening practice. I love feeling in awe of bigness, feeling like I’m in the presence of something much, much bigger than me. I think I’m always searching (even in the smallest of artworks I make) for the feeling I get when I’m next to a huge lump of gritstone rock at Curbar Edge, my local rock face in the Peak District National Park. It’s the same feeling I get when I experience Wolfgang Tillmans’ work in the flesh, where the bigness just carries me away across landscapes and into another space. When mulching and covering the beds at the allotment, all of a sudden it moved beyond an ordinary gardening task and became a kind of space-making.

I often describe my work building the rose garden (and maintaining an allotment plot in general) as totally absurd; trying to get my sick and tired body to sculpt this huge space simply feels a bit ridiculous when met with what that space demands of my very limited energy. It feels like I’m being asked to hold the space up as if it were some kind of giant inflatable shape, and all I have are my tired arms to try and keep it from falling over and rolling away. Sometimes it feels like the chanting at a football match, the way the chorus from the crowd feels at one and the same time like a buoyant wave of singing and a crash of noise imploding; always on the edge of collapse. And I do have help. It would be impossible to do it without, and wrong of me not to clarify this essential component of my access to this practice. And even with this, the task at hand still feels enormous. But I think that might be part of what fuels the work in this way. This whole existence – enduring/living/loving through sickness – is absurd. It’s an outrageous request that is demanded of our bodies, of our minds, of our spirit. But I think I’m interested in what happens when I sit with sickness, hold hands with it, move through this world by its side instead of operating from a place of abandonment or rejection or cure. I want to hold myself holding sickness, and find the vast landscapes within upon which to settle.

A digital photograph (portrait, colour), of what looks like some kind of garden or allotment plot on a winter’s day. There are bare trees in the background, and in front of them is a poly-tunnel covered in green mesh and some plants growing inside. In the foreground is what looks like a large rectangular raised bed covered in grey-black plastic sheeting. It looks like it is weighed down with lots of random objects such as timber, pallets and bricks.
The first bed of the plot mulched and covered in plastic sheeting
A digital photograph (portrait, colour), of what looks like a bird’s-eye view of the ground of some kind of garden or field. To the right of the image is a bare, grassy, muddy patch of earth. To the left of the image is what looks like some kind of grey-black plastic sheeting held down by lumps of brick.
allotment collage
A digital photograph (portrait, colour), of what looks like some kind of allotment or garden scene and a large patch of earth covered up in grey-black plastic. There is a lot of garden junk such as compost bins, wooden pallets and bricks.
the mulched bed becomes a covered swimming pool
A digital photograph (portrait, colour), of what looks like a bird’s-eye view of the ground of some kind of garden or field. To the bottom of the image is a bare, grassy, muddy patch of earth. There is a pair of feet in red trainers standing on the grass, perhaps belonging to the person taking the photo. To the top of the image is what looks like some kind of grey-black plastic sheeting held down by lumps of brick.

After heavily mulching the cardboard we then covered the beds in black plastic sheeting to block out all light and allow the weeds to rot down into the soil, ready for planting in the autumn. This really enhanced the sense that I was working with a kind of collage. The beds immediately looked like covered-up swimming pools, and I loved playing with the various allotment debris that we had gathered to weigh down the sheeting. This whole process took up the entire first year of the work we did on the plot. There was little to no “proper” gardening (as in sowing/planting/cultivating) in that first year. And yet, I was there, I was at home dreaming about it, I was making something, committing time and energy to a place with a hope to emerge into a future. All the components of a garden were present; I was ungardening.

Despite the plot now looking significantly different to that first year, I am still ungardening. As with everything that is allowed to work on crip-time, ungardening facilitates whole ways of experiencing the garden that would otherwise be lost. Ungardening allows me to keep my body at the centre of my gardening practice, and allows the garden to exist beyond me. Rather than a singular space, the garden becomes a shifting, interconnected ground of thinking and growing and imagining and living and dying. More than anything, ungardening reminds me that the garden is made for my absence, and my absence holds more than a missing body.

AI RECOGNISING Timestamps! ⏰

Let’s get one thing straight! ☝

Doom (1-second pause) Doom

is a DIFFERENT sound from

Doom (half a second pause) Doom

I need to play those back using real instruments at the right time. To achieve this, I have to do two things:

  1. Get the timestamps of the sound recognition from my machine learning model in a way that can interact with a web app.
  2. Play back the sounds at the correct intervals.

Today, I want to chat about the first part.

Once I run my machine learning model, it recognises and timestamps when sounds were performed. You can see this in the first photo below. The model shows the probabilities of recognising each sound, such as “doom”, “tak”, or “tak-a-tak”.

A dashboard showing detailed results of sound recognition. It displays a table with timestamps and probabilities for different sounds (background noise, doom, tak, tak-a-tak). On the right, there are two scatter plots visualizing processed features and spectral features, with color-coded data points representing different sound classifications.

Next, I need to export this model as a WebAssembly package so it can run in a web browser. This allows anyone to interact with my online doom/tak exhibition!

The deployment configuration screen for a machine learning model named "Doom Tak". It shows options for deploying the model as a WebAssembly package for web browsers. The interface includes a QR code and a "Launch in browser" button for testing the prototype.

In the second photo, you can see the deployment section where I configure the WebAssembly package. This makes the model run efficiently in web browsers without an internet connection, minimising latency.

Exporting the model is straightforward. Once exported, it generates a JavaScript model that can be used on a webpage. Now, I can call this entire machine learning model from a web app script to do whatever I want. 

Easy peasy lemon squeezy!

A close-up of the model optimization settings. It compares quantized (int8) and unoptimized (float32) versions of the model, showing metrics like latency, RAM usage, and flash memory requirements for different components (MFCC, Spectral Features, Classifier). The image also shows a "Build" button and indicates that the estimates are for a MacBook Pro 16" 2020 with an Intel Core i9 2.4GHz processor.

Creative Process Initiation

https://chatgpt.com/share/cd501025-b4bc-43ce-a160-4c6a9bdc05ec

Yeah, so let’s do it in a different way. Let’s group it in, let’s just all of us trying to have a group chat with GPT. I just turn on the ChatGPT so we can have a conversation and then let’s see how is the capability. So, hey, GPT, we are a group of four people, five people. Yeah, and we are trying to experience or practice the creative process during our discussions. So, you can jump in to discuss anytime, but we may have several initial conversations or some ideas, and if you find it is a good time to step in, just tell me. All right, yeah, you can say, okay, if you’re ready.
ChatGPT
Okay, I’m ready! Let’s start your creative process discussion. Whenever you have an idea or want to bounce something off me, just let me know.

I think it is, um…
ChatGPT
It sounds like you’re still gathering your thoughts. What’s the main topic or project you’re working on today?

Yeah, today we have…
ChatGPT
Got it, you’re deciding on today’s focus. Are you thinking about brainstorming new ideas, planning a project, or something else?

Feel free to join. I mean, our friend is in a Zoom meeting, so they are trying to understand how is the capability of the AI tools, and I think this is a very handy way to demonstrate the creative process. But I would like to invite my friend to join our conversation too. Well, maybe you can tell how they can step in.
ChatGPT
Sure! Your friend can join the conversation at any time by simply sharing their thoughts or questions. If they have a specific idea or topic in mind, they can mention it, and we can discuss it together. Feel free to jump in whenever you’re ready!

Okay, maybe I just present my idea to my friend. I would like to invite ChatGPT as my companion and record my daily life, and it will become a diary or my personal journey. This journey will be a reference that creates a fiction character. For instance, it will be an instant me as a character, and then I will use this character as a… I will offer several backgrounds, let’s say it’s a sci-fi background or cyberpunk background or fantasy background, and it will fit in, become the main character of the story. What do you feel?
ChatGPT
That sounds like a fantastic and creative idea! Using ChatGPT to help record your daily life and then transforming that into a fictional character in various settings could make for a rich and engaging story. Do you have a specific background in mind to start with, like sci-fi, cyberpunk, or fantasy? Or would you like some help exploring different ideas?

Or could you give me several variations about the background of the story?
ChatGPT
Absolutely, here are a few variations you might like:

Sci-Fi Background: Your character lives in a futuristic city on Mars, where society is divided between the elite who live in towering glass skyscrapers and the working class who inhabit underground colonies. They uncover a secret about an impending alien invasion and must rally both classes to defend their home.

Cyberpunk Background: Set in a neon-lit megacity where mega-corporations control every aspect of life, your character is a hacker who discovers a conspiracy that could topple the corporate regime. They navigate the city’s dark alleys and high-tech surveillance systems to gather allies and fight for freedom.

Fantasy Background: In a world where magic is real but controlled by a powerful guild, your character is an ordinary person who stumbles upon a hidden magical artifact. This discovery thrusts them into a battle between the guild and a rebellion seeking to make magic accessible to all.

Post-Apocalyptic Background: After a global catastrophe, society has collapsed into scattered, warring factions. Your character is a scavenger who finds a relic of the old world that holds the key to rebuilding civilization. They must navigate dangerous territories and form uneasy alliances to achieve their goal.

Steampunk Background: In an alternate Victorian-era world powered by steam and clockwork, your character is an inventor who creates a machine that can change the course of history. They are pursued by secret societies and must travel across continents to protect their invention.

Do any of these settings inspire you?

Yeah, thank you. But I would like to take a pause here because I will jump back to the discussion with my friend and see how they feel.
ChatGPT
Of course! Take your time to discuss with your friend. When you’re ready to continue or if you need more ideas, feel free to let me know. Enjoy your conversation!

Understanding Spectral Analysis in my Doom Tak Machine Learning Model!

One of the big improvements I made to my machine learning model is using something called ✨ spectral analysis ✨

But what is Spectral Analysis?

Spectral analysis helps us break down sounds into their different parts. For example, when you hear a song, you can pick out the drums, guitar, and vocals, right?

Well, spectral analysis does something similar by looking at the different frequencies in a sound, which makes it easier for our model to tell different sounds apart! Woohoo!

Why Use Spectral Analysis?

Unlike other methods that only look at how loud a sound is over time, spectral analysis gives us a detailed picture by showing us all the different frequencies! This helps our model recognise and separate sounds that might seem similar at first!

How We Use Spectral Analysis

First, we get the sound data ready. This means making the audio signals more uniform and cutting the audio into smaller parts. Then, we use a tool called Fast Fourier Transform (FFT) to change the sound data from a time-based view to a frequency-based view. This lets us see all the different frequencies in the sound. After using FFT, we pick out important details from the frequency data to understand the sound better. 
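For the curious, that time-to-frequency step can be sketched with a plain discrete Fourier transform (real code uses the much faster FFT, but the idea is identical):

```javascript
// Naive DFT: turns N time-domain samples into N frequency bins, where
// each bin's magnitude says how strongly that frequency is present.
function dftMagnitudes(samples) {
  const N = samples.length;
  const magnitudes = [];
  for (let k = 0; k < N; k++) {
    let re = 0;
    let im = 0;
    for (let n = 0; n < N; n++) {
      const angle = (-2 * Math.PI * k * n) / N;
      re += samples[n] * Math.cos(angle);
      im += samples[n] * Math.sin(angle);
    }
    magnitudes.push(Math.sqrt(re * re + im * im));
  }
  return magnitudes;
}

// A pure tone that completes 5 cycles across our 64-sample window...
const N = 64;
const tone = Array.from({ length: N }, (_, n) => Math.sin((2 * Math.PI * 5 * n) / N));
const magnitudes = dftMagnitudes(tone);
// ...shows up as a spike in frequency bin 5 (with a mirror at bin N - 5).
```

Spectral features are, at heart, summaries (peaks, spread, energy per band) of a magnitude spectrum like this one.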

We already use a method called MFCC (Mel-Frequency Cepstral Coefficients – check out my previous blog about it!) to get key features from sounds. By adding spectral analysis to MFCC, we make our model EVEN BETTER at recognising sounds!

It is still not perfect, but this has made my machine learning model much better at detecting and differentiating between doom and tak sounds!

Two graphs illustrating spectral analysis. Top: Time-domain waveform of a complex audio signal. Bottom: Frequency-domain spectrum showing the signal's component frequencies, demonstrating how spectral analysis breaks down sound into its constituent parts.

Womb Room: Using AI for previs

In a previous post, I spoke about taking a response from someone describing what their womb would look like if it were a place and turning that into a prompt for AI.

As a refresher here is the prompt:

a photorealistic image of a room which looks like a lush garden inside which is representative of the uterus of someone with endometriosis

And here is the result:

Image Description

AI generated imagery of four hyper realistic photos of an interior design for the dreamy fairy tale bedroom in paradise, pink and green color palette, a lush garden full with flowers and plants surrounding it, magical atmosphere, surrealistic

Honestly, I wasn’t expecting the results to be as close to what was in my mind as this. I was really impressed. I’m intrigued to know which part of these responses represents the “endometriosis” section of the AI prompt.

I really like option three because the round shape of draped pink fabrics sort of looks like a womb or vaginal opening. It’s definitely my favourite, although I’m loving the weird abstract furniture depicted at the back of option two.

Since I like option three the most, I decided to work on it a bit further.

Image Description

A collage of pink and green floral patterns, vines, and foliage surrounded by lush jungle foliage, with mirrors reflecting in the background. The scene is set at dawn or dusk, creating an ethereal atmosphere. This style blends natural elements with fantasy and surrealism, giving it a dreamy quality in the style of surrealist artists.

Here are examples of subtle variations of the original image

And here are examples of strong variations of the original image.

Image Description

A collage of four images depicting lush greenery, pink drapes, and exotic plants in an opulent setting. The style is a mix between hyper-realistic botanicals and surreal digital art, with vibrant colors and soft lighting creating a dreamy atmosphere. Each element has its own distinct visual narrative within the composition, showcasing a harmonious blend of nature’s beauty and modern aesthetics. This combination creates a visually stunning representation that sparks imagination.

I prefer the strong variations because the garden room looks more overgrown and wild, but still retains its beauty, which feels closer to what the original response from the participant was saying.

Regardless of my preferences, I think my process will include me using all of these images as previs. I plan to then use 3D software inspired by all of the imagery seen here to create the scene. I’ll take features from all of the images as well as my own ideas that are closer to the original response from the participant to bring their “womb room” to life.

I’ll be honest though, the 3D production is exciting but also scary. I have a bit of imposter syndrome of whether I can actually deliver this. I sort of feel inferior to what Midjourney was able to create.

Human prompts and AI prompts

I’ve just realised something.

The model I have been using to create this project is similar to the process of asking AI to generate imagery for me.

I am using a prompt to ask people living with chronic conditions to generate imagery about what their womb looks like.

The prompt is this central question: If your womb were a place you could visit, what would it look like?

The process for this project then asks the participants to generate abstract imagery in their brains about what their womb would look like if it were a place. They then communicate that to me via the written or spoken word.

I am then taking that response and turning it into a prompt for AI, allowing algorithms to generate digital imagery of what the participants were describing to me. I’m only now realising that I’ve sort of acted like an intermediary between the participant and Midjourney.

If I’m asking the participants a prompt that triggers them to generate imagery, am I treating them the same way as an AI algorithm? Is that ethical? These are all questions that are important to consider when working with people who are expressing intimate ideas about their bodies.

Womb Room: Turning early research into AI prompts

Now to begin with my project Womb Room.

For a few years, I’ve been exploring a project called Womb Room. The idea is to ask a central question:

If your womb were a place you could visit, what would it look like?

I want people living with chronic conditions of the uterus and ovaries to creatively explore what their wombs would look like if they were real spaces we could visit as human beings. It’s been difficult because I struggle to know where to begin. How will I bring this project to life? I go between wanting the project to be VR and a physical installation that people can go to. Maybe it can be both?

The most intimidating thing for me is knowing whether I have the skills to do this. This is why it’s been really exciting to join this residency. After experimenting with AI in the previous posts, I decided to have my first go at seeing what it would produce in relation to the central question I plan to ask people who will participate in the project.

A few years ago, I asked a couple of people the question I mentioned above. I decided to use one of the responses I got and turn it into a prompt for Midjourney.

This was the original response. For reference, this person has endometriosis and is a yoga instructor and herbalist.

“I honestly think that my uterus and my ovaries and all of my little reproductive organs I feel like it’s a beautiful garden. I think it’s a gorgeous gorgeous garden. It’s lush, it’s fertile. It’s it’s just spectacular to behold. However, if I am not doing the right kind of gardening you know, like when I’m not doing the, the best in terms of staying on top of like diet and exercise and the things that I know help me to manage the inflammation…maybe my garden is becoming a little bit overrun with thorns, or maybe there’s a few too many difficult weeds that need pruning.

I think when my endometriosis is flaring, and it’s not at its greatest, I like to imagine that maybe it’s just that I need to tend to the garden in a more sensitive way. Do a little pruning, maybe throw a little fertiliser on there. Maybe cut out some of the thorny rose bushes, you know. But I never like to imagine it as anything less than beautiful and fertile and just lush and gorgeous. Because I think, you know, despite what it may be doing that doesn’t feel ideal, it’s still this magical thing of complete, utter mastery that I just, I can’t help but like, be in awe of it and its beauty.”

I really love this response because it challenged my expectations. I assumed most of the people who would respond to the question I posed would give really intense and negative imagery, but this person described their womb as something beautiful regardless of how much pain it causes them.

I wanted this to be my first attempt at bringing my project to life.

Here is the prompt I created from the response above:

a photorealistic image of a room which looks like a lush garden inside which is representative of the uterus of someone with endometriosis

It doesn’t quite encapsulate everything that was mentioned in the response above, however, it’s a good place to start. In my next post, I’ll share the imagery generated from the prompt and what I plan to do to bring my ideas to life for this project.

Asking AI to visualise the uterus

In my previous post, I asked AI to visualise the womb and this exposed an interesting bias about how the word “womb” is attributed to pregnancy, childbirth and motherhood.

I wanted to see how the results would differ if I used the word “uterus” instead. This word felt more scientific and anatomical so I assumed the imagery would look more like the organ as it appears in our body instead of the abstract image of fetuses in utero.

I used this prompt:

a photorealistic image of the uterus

These were the results

Image Description

A series of four images showcasing the different types of colorful human-style organs. Each photo is set against a soft pastel background that enhances the realism and detail of each organ with delicate pink veins. This visual representation highlights both scientific precision while also conveying an emotional connection through its vibrant colors.

Just as I expected. Using more scientific language gives us more scientific results that do not carry the bias of the womb or uterus being used for childbirth. But one thing sticks out to me…It hasn’t quite depicted the uterus accurately. They sort of look like non-descript abstract human organs. The last one looks like a heart. This is further proven when we look at the raw results of Midjourney trying to describe the imagery it created.

See below:

A series of images showcasing the anatomy and structure of different types of colorful human organ, such as heart, buds, Game design style, Zbrush rendering. It is rendered in high resolution using Octane Render with a 3D render engine. The lighting effect adds depth to each scene. In one part there’s an illustration of two ram horns. In another there is an illustration of many pink flower petals floating around the red root system of these flowers. All four parts are shown side by side.

A series of images showcasing the anatomy and structure of different human organ types, such as heart or bowels, rendered in realistic 3D graphics. The composition includes close-ups on individual lungs to highlight their unique features and textures. Each photo is set against a soft pastel background that enhances the realism and detail of eachorganshaped with delicate pink veins. This visual representation highlights both scientific precision while also conveying an emotional connection through its vibrant colors.

4 different views of the organ heart in hyper realistic style

Create four images of realistic human anatomy, each depicting different types of biological structures such as the heart and living arteries. Each photo should showcase intricate details in high resolution. The first image is of two beautiful female hearts made from plastic, the second one shows three living veins with red veins, the third has five white brain cells, and the fourth picture features pink flowers growing on an ear. All photos have a blue background.

The organs mentioned in the image descriptions include the heart, bowels, lungs, arteries, and brain cells.

Not one mention of a uterus. Strange.

Perhaps next time I need to make more specific prompts, or perhaps this exposes that Midjourney has difficulty generating the uterus accurately without the bias of childbirth and pregnancy included.

This is not the overall aim of my project, so I will pause my experiments here. Nonetheless, it’s been eye-opening to see the gaps in how AI views the human body and in turn how we view our bodies.

Asking AI to visualise the womb

For my project Womb Room, I wanted to see how I could use AI to assist my process in creating immersive visuals that represent what the womb looks like from the perspective of someone living with a chronic womb condition.

This project is supposed to inspire creativity and abstract thinking. I want the visuals to feel metaphorical and allow us to reframe how we think about the body.

But in this case, I was interested in seeing how AI perceives the human body. Why? Because it’s like a mirror. It shows our own biases considering that the algorithms are made up of data that reflects how we as humans view the world.

Using Midjourney, I wanted to see how AI would visualise the womb.

I wrote this prompt:

a photorealistic image of the womb

Image Description:

4 different images of the process from an ambient cell to cross-sections showing a baby in an umbilical cord, from inside a mother’s belly to a pregnant woman and everything is made in the style of pink wax, hyper realistic, hyper detailed, high resolution, high definition.

Fascinating!

The first four results all included imagery of a fetus. A bias has emerged. There is a clear association between the “womb” and pregnancy, even though not everyone with a womb sees their organ that way.

Nonetheless, the imagery looks so interesting, especially the last one, where it sort of looks like a fetus forming in real time… or maybe exploding, I’m not sure.

As you know, I have been using Midjourney to support me with the image descriptions as well. The image descriptions that AI generated for me also tell me a lot about the biases we see with AI. In my prompt, I asked AI to generate some imagery of the womb. When I put the image it gave me back into the system to generate an image description, it revealed a lot.

The image description I chose for this post was the most neutral and descriptive. However, I was presented with three other options. Read them below:

“4 different images of an accent color pink baby in the mother’s stomach, detailed in the style of hyperrealism, organic and fluid forms, colorful Moebius, dark crimson and light amber, macro photography, surrealistic, close-up shots, photorealistic detail, highly realistic.”

A series of images showing the development and growth of an orange-pink baby in its mother’s stomach, from inside to outside, with detailed close-ups of each stage. The last picture is a realistic photo of a human being emerging outwards in the style of James Gurney. Hyper-realistic photography. High resolution.

4 different images of the human baby in the mother’s stomach, showing inside and outside views. The images feature organic textures, fluid forms in the style of babycore, and detailed anatomy. A mandelbrot palette was used to create highly realistic, organic sculptures with detailed close-ups in a hyper-realistic style.

In each of these descriptions, a few words stand out to me.

Mother – this implies a gendered association with the womb and the implication that the womb’s purpose is for pregnancy and childbirth. Remember, I never used this word in my prompt.

Baby – the use of “baby” rather than scientific language like “fetus” is very interesting

Stomach – babies do not grow in the stomach organ; they grow in the womb, so once again we’re seeing non-scientific language used to describe the imagery

Discovering these biases is exciting for me. I feel like testing the algorithm for these biases can actually inform my process. Perhaps it’s not just about using AI to help me visualise my project. Maybe it’s also about looking into the gaps that the responses AI creates to question our own human biases.

After getting these results back, I then thought about what Midjourney would generate if I changed my language. Would different words carry different associations with the same organ and therefore generate different results?

We’ll find that out in my next post.

on un-making…

An image of the front cover of Polly Atkin's 2021 poetry collection. At the bottom of the cover are the title details which read "Much With Body; Polly Atkin; "This is serious play indeed" – Vahni Capildeo; Poetry Book Society Recommendation". The cover image is of what looks like a watercolour painting of a figure lying on their back, floating in greeny-blue water. Their body is viewed just below the water's surface, which we can see rippling at the very top of the image.

When describing what process means to me in my practice, i continually find myself drawn to the poem Unwalking by Polly Atkin, which is featured in her 2021 collection Much With Body, published by Seren Books. It is a striking piece which really captures an atmosphere of sickness in such an embodied way. I love it, and find myself returning to it again and again, pulling at its imaginative terminologies and applying them to my own ideas of making/un-making.

“The body is what I cannot untake with me what I cannot

leave behind what i cannot not discover, continually, along the way”

Unwalking by Polly Atkin, which is featured in her 2021 collection Much With Body, published by Seren Books. Permission has been granted by the author to quote from this work here.

I love the richness of how she is inverting that sense of the body’s lacking, the body’s un-abled-ness, the body… not; how in its un-doing, it holds its own forever-presence, forever-doing, the un becoming full and energetically creative. It makes me think of a text i wrote a few years ago as part of my work for the 30/30 project in 2019. The prompt for the day was “how do you say no?”.

I want to say no on my own terms
I want to say no without an asterisk to my body
I want to say no with confidence
I want to say no to everything
I want to stop saying no
I want saying no to be a singular event
I want saying no to occur in isolated vocabulary as its dictionary
definition intended
I want saying no to stop corresponding with my limits
I want to understand what saying no means
I want others to understand what saying no means
I want saying no to stop making me feel scared of saying yes
I want saying no to become less spikey and difficult
I want saying no to become comfortable
I want saying no to be collaborative
I want saying no to be assertive
I want to say no out loud
I want it to hurt less when I say no
I want to know when to say no
I want to know when to say no
I want saying no to make me feel better
I want to say no without baggage
I want to say no without explaining
I want to say no without saying no
I want to say no telepathically
I want saying no to weigh less
I want saying no to mean more
I want saying no to mean less
I want to say no without apologising
I want to say no without apologising
I want to say no without apologising
I want to say no in liberation
I want to say no in dedication
I want to say no in warm hugs and virtual kisses in text messages I want to say no and mean it
I want to say no and choose it
I want to say no in support
I want to say no without thinking
I want to say no without guilt
I want to take ownership of saying no
I want to say no without saying no
Today I said no to gardening, bathing, reading, instagram, emails,
text messages, cooking.
Did I actually say no?

I like this way of approaching the innate sense of lacking in which illness and disability are fixed in society and culture, by sitting with it, in it; what do the no’s that we have to say/do/feel embody for us? What happens when the unmoving, the unwalking, becomes the central journey? what expansive landscapes can be found when we consider the lacking as an entire other world, a world which is lived? I’ve been thinking a lot recently about illness and disability as place, and how exciting that is as a concept to explore. I am about to launch a curatorial project exploring art and disability in rural spaces called Further Afield, and the artists featured in the programme have created some beautiful works interrogating this idea of the body in place and the place in/as the body. It’s such an exciting premise to think about, using the body as the central space in which all journeys occur and all worlds emerge and interact. Another line from Polly’s Unwalking that i love:

“There are destinations without journeys, things you will never see if you

walk walk walk walk walk

I do not have to move to be moved. Are you moved?”

Unwalking by Polly Atkin, which is featured in her 2021 collection Much With Body, published by Seren Books. Permission has been granted by the author to quote from this work here.

Here the location is the body, and this further inversion of action is stretched; movement becoming a dense, energetic rock containing life after life after life; stillness becoming a hum; motion condensed into a clenched fist; action stored in the knuckles of a held breath.

the line “I do not have to move to be moved” is a masterpiece in capturing how powerful some moments of sickness can be. I can have the most intimate, magical connections with other people whom i have never been in the same room with, all whilst i lie in bed in the dark. ideas arrive in my head at times when i am at my most debilitated by pain. inspiration often occurs amongst the dullness, the ordinary fabric of a sick life. The prompt “Are you moved?” really strikes me as an invitation to a non-sick/non-disabled reader to not only consider this world that goes unnoticed, unseen, untouched, but to also consider what a kind of lacking could embody in the non-disabled world; how perhaps in a world of constant movement, the lack is found in the absence of the un-moving; so many are simply not unmoved. Again, the body is repositioned as this central space; it becomes the space. I love exploring scale in my work in this same way: how so much of illness for me is about how to find vastness within the small and the cramped. For me, when Polly conjures this image of stillness, she is also capturing vastness; i want to take this central space of the body and zoom into it until it becomes massive, boundless. In this way it is impossible for making to be anything other than forever-present, a constant conjuring and collaboration with my body’s own limits.

But how am I supposed to detect sounds close together?!

One issue I had with the previous machine learning model I built was its inability to recognise one sound directly after another. I had to wait about 1 second before starting the next sound for the machine learning model to recognise it (I know it doesn’t sound like much, but trust me – IT IS!)

That’s frustrating and not acceptable. Let’s fix this!

After extensive research, we identified that treating the audio as time series data is crucial for our needs. It can significantly improve our model’s performance.

So, what did we do?

1. Made the Window Size Smaller

  • We adjusted the window size to be shorter. Why? Because the samples are brief, and we want it to be just long enough to capture the longest possible sound, which is “takatak” at 750 milliseconds.

2. Decreased the “Window Increase” Interval

  • We reduced the window increase interval to 100ms. Why? Because if the model only updates every half a second, it can only predict at those intervals. By making the window increase more gradual, the model can catch sounds more accurately and promptly.

3. Actual BIG BOSS: Spectral Analysis

  • Spectral analysis is awesome, she is the real game-changer here! Last time, we talked about using Mel-Frequency Cepstral Coefficients (MFCC), which are super cool. This time, we are also adding spectral analysis alongside MFCC. This extra step allows the model to examine the waveform itself to detect a “doom” or a “tak” based on its shape. More on Spectral Analysis in the next blog!
Diagram showing sliding window technique for time series analysis. Top: Timeline with colored dots representing data points. Bottom: Overlapping windows capturing subsets of data points over time.
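Putting steps 1 and 2 together, the sliding-window idea can be sketched in a few lines of Python. The 750 ms window and 100 ms window increase come from the steps above; the 16 kHz sample rate is my own assumption for illustration:

```python
import numpy as np

# Settings from the post (sample rate is an assumption)
SR = 16000       # assumed samples per second
WINDOW_MS = 750  # just long enough for the longest sound, "takatak"
HOP_MS = 100     # the "window increase": how far each window slides forward

def sliding_windows(samples, sr=SR, window_ms=WINDOW_MS, hop_ms=HOP_MS):
    win = int(sr * window_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    # Yield overlapping windows; the model classifies each one independently,
    # so a smaller hop means predictions arrive more often
    for start in range(0, len(samples) - win + 1, hop):
        yield samples[start:start + win]

# Two seconds of (silent) audio produces a new window every 100 ms
audio = np.zeros(2 * SR)
windows = list(sliding_windows(audio))
print(len(windows))
```

With a half-second hop the model could only predict twice per second; at 100 ms it gets ten chances per second, which is why back-to-back “doom” and “tak” sounds stop slipping between windows.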