WELCOME TO MY STUDIO

A 3D generated image of the sky and an ocean. Yellow text at the bottom says "On that summer afternoon, all I had was the sky, the sea and my body in between."

Hey! I’m Jameisha, an artist-filmmaker and writer from South London.

As a disabled artist, I find inspiration in stories about how illness and disability intersect with themes of Black history, pop culture, identity and colonialism. Traditionally, I work with the moving image; however, I am currently expanding my practice to include digital art and immersive technology. My work seeks to explore how art can bridge empathy gaps and offer an alternative viewpoint on how we talk about being ill.

During the residency, I’ll be working on a project called WOMB ROOM. “If your womb were a place you could visit, what would it look like?” Through experimenting with digital art and AI, WOMB ROOM encourages people living with gynaecological conditions to re-imagine what the womb looks like. 

Feel free to look around my studio and make yourself at home. Leave any comments or questions about my work. I look forward to reading them. 


AI-Supported Image Descriptions

My fellow resident, James Kong (be sure to check out his studio here) has written a post about how AI can aid accessibility, especially when it comes to image descriptions. I was really interested in how I could incorporate this into my own work.

As a disabled artist, I try my best to make my work as accessible as possible. This may take the form of subtitles, audio descriptions and image descriptions. It’s been interesting to see AI experts like James use AI to support the image description process.

I make disabled explainer content on social media and one of the more tedious tasks is image descriptions. Of course, it’s worth the effort, however, I often struggle with writing these descriptions because of the cognitive dysfunction that comes with my disability.

In my previous post, where I spoke about ‘process’ and using Midjourney for the first time, I also used AI to support me with the image descriptions. Accuracy is important for image descriptions, so I didn’t rely on them completely; however, I enjoyed how much easier the task became. This is another example of AI being included in the creative process to increase accessibility for all audience members.

Below is a screenshot example of me using ChatGPT to write an image description for me. Interestingly enough, I’ll probably use ChatGPT to write an image description of the screenshot example.

Image Description: A screenshot image that shows a user interface for a text-based image description request. It includes an uploaded image and an AI-generated response. The image itself, seen in the screenshot, features a Black woman with long dreadlocks tied in a bun, pouring coffee from a French press into a white cup. She is in a cozy, well-equipped kitchen. The woman wears a colorful patterned top and bracelets, focusing intently on pouring the coffee. Steam rises from the cup and French press, indicating the coffee is freshly brewed. The scene is bathed in soft, natural light, creating a serene ambiance. The AI-generated description closely matches this depiction, detailing the woman’s appearance, the kitchen setting, and the overall atmosphere.

I think ChatGPT did an excellent job of writing the image description. If I were to use it in real life, I would make a few tweaks. Here’s what my edited version would look like.

“An AI generated image that depicts a Black woman with long, neatly styled locs tied up in a bun, pouring coffee from a French press into a white cup. She is in a cozy kitchen setting, adorned with modern appliances and a warm, homely atmosphere. The woman is wearing a colorful, patterned top and several bracelets. Her expression is focused as she carefully pours the coffee, with steam rising from both the cup and the French press, suggesting the coffee is freshly brewed. The lighting is soft and natural, casting a serene ambiance over the scene.”


On Process & Algorithms

[Image Description: A photorealistic AI generated image of a French press filled with steaming coffee sitting on the counter of an English traditional style kitchen.]

Before embarking on this residency, I had been thinking a lot about the idea of ‘process.’

During lockdown, repeat processes and set routines kept me grounded. I particularly think back to buying a coffee machine. You know, the ones where you buy the fancy-looking pods, pop them in the machine, press a button and watch your espresso be poured for you at the perfect temperature. There was something so unappealing about this. I was craving a deeper connection between the process and output of my morning coffee.

Now, in 2024, the coffee machine gathers dust while I use a simple French press instead. I enjoy grinding the beans to my liking, scooping the grounds into the press, filling it up with hot water, waiting a while, pushing the plunger down and finally pouring my coffee into a mug.

Perhaps this all sounds silly, but I learned a lot from reintroducing ‘process’ into my coffee routine. It’s for this reason that I’ve previously been a bit apprehensive about using artificial intelligence in my creative practice. It felt as though I was once again removing the process from making art.

Now on this residency, I feel quite differently. I’m excited to explore the ways I can use AI within my process. I predict that I’ll use AI during the research and the early visualisation stages of my projects and then take that inspiration into the 3D production work that I’ve been developing over the last few months. However, I’m also happy for any of these predictions to change.

Anyway, the initial image on this post is one of my first attempts at using Midjourney, a generative AI tool. It did a really good job, but there are a few mistakes if you look more closely. The actual plunger used to press the coffee grounds to the bottom is nowhere to be seen. Also, you can see quite a lot of liquid coffee at the top. These mistakes are really exciting because it means it is my job as an artist to figure out solutions to these errors… or even exploit the errors for creative purposes.

[Image Description: A photorealistic AI generated image of a Black woman with dreadlocks pouring coffee from the French press into her cup in front of an old kitchen table.]

The next image I have included here is me trying to recreate my mornings with my French press using Midjourney. This is the prompt I used to generate the image:

a photorealistic image of a Black woman with dreadlocks pouring coffee from a French press into a mug in a traditional British kitchen

What I notice in these images (and the many others I tried to create) is that the algorithms do not understand how we as humans pour coffee from French presses into mugs. In the above photo, the person is pouring from the French press into a mug at a strange angle. They also hold a random, unidentified object in their other hand. I tried to fix the issue with further prompts, but I think it will take more time for me to get it right.

The last thing I’ll add is that the image descriptions used in this post were supported by AI, but I’ll go into more detail on that in another post.

This is all part of the fun and I’m excited to see what will come of these AI experiments.
