REST without AI

This week, Hong Kong was battered by heavy rain, and I took the chance to take a breather and recharge. The last few weeks have been manic: I’ve been working on three software projects at once, and the non-stop pace had left me totally overloaded, so this rain break was just what I needed. I decided to visit my wife’s home village, a restorative spot nestled among the city’s forests. The air smelt of earth, and the quiet beauty of the landscape was a welcome change. I could feel the tension of my tightly wound days begin to unravel, replaced by a sense of calm that felt long overdue. The mountains stood like silent guards, making me think about the balance between creativity and rest.

The Question: 11AUG2025

Modern AI, most prominently represented by large language models (LLMs), prompts a fundamental question: does it possess consciousness? To pose the question another way: the original wellspring of the AI concept lies where brain scientists, computer scientists and mathematicians began to explore whether consciousness itself could be understood as a mathematical or computational process, as a system. This inquiry asks whether today’s advanced automation is merely sophisticated mimicry or a genuine step towards the sentient machines envisioned by the field’s pioneers. Noam Chomsky, for one, openly criticises the GPT models as fake intelligence, a mere copycat. Or is there an even deeper question: is there any form of computing that can capture the differences between intelligence, awareness and consciousness? Or perhaps we simply don’t understand our own kind, and those three words are just a game of our language, a misconception that never referred to anything real.

Catastrophic Forgetting

The contemporary world is increasingly influenced by artificial intelligence (AI) models, some of which are known as ‘world models’. While these concepts gained significant attention in 2025, their origins reach much further back. Jürgen Schmidhuber introduced the foundational architecture for planning and reinforcement learning (RL) with interacting recurrent neural networks (RNNs), comprising a controller and a world model, in 1990. In this framework, the world model serves as an internal, differentiable simulation of the environment, learning to predict the outcomes of the controller’s actions. By simulating these action-consequence chains internally, the agent can plan and optimise its decisions. This approach is now commonly used in video prediction and simulations within game engines, yet it remains closely tied to cameras and image processing.
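The idea can be sketched in a few lines. This is my own toy illustration, not Schmidhuber’s actual 1990 architecture (which uses recurrent neural networks): a “world model” is fitted to observed transitions, and the controller then plans by simulating candidate action sequences inside the model rather than acting in the real environment.

```python
# Observed (state, action, next_state) transitions from a toy 1-D world.
observed = [(0, +1, 1), (1, +1, 2), (2, -1, 1)]

# 1. Fit the world model: the average effect of each action on the state.
effects = {}
for s, a, s_next in observed:
    effects.setdefault(a, []).append(s_next - s)
world_model = {a: sum(d) / len(d) for a, d in effects.items()}

def simulate(state, actions):
    """Roll the learned model forward over a candidate action sequence."""
    for a in actions:
        state += world_model[a]
    return state

# 2. Plan: pick the action sequence whose simulated outcome lands
#    closest to the goal, without ever touching the real environment.
goal = 3
candidates = [(+1, +1, +1), (+1, -1, +1), (-1, -1, -1)]
plan = min(candidates, key=lambda seq: abs(simulate(0, seq) - goal))
print(plan)
```

The essential point survives even in this tiny sketch: once the model can predict consequences, planning becomes search over imagined futures.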

Despite these advances, a fundamental limitation first identified in 1990 continues to challenge progress: instability and catastrophic forgetting. As the controller guides the agent into new areas of experience, the network risks overwriting previously learned knowledge. This leads to fragile, non-lifelong learning, where new information erases older representations.

Furthermore, Prof. Yann LeCun noted in his presentation ‘The Shape of AI to Come’ at the AI Action Summit’s Science Days at the Institut Polytechnique de Paris (IP Paris) that the volume of data a large language model contains remains minimal compared with what a four-year-old child has absorbed, roughly 1.1 × 10^14 bytes. One of his slide titles has stayed in my mind: “Auto-Regressive Generative Models Suck!” In reinforcement learning, an AI’s policies often remain static, unable to adapt to the unforeseen complexities of the real world; in other words, the AI does not learn after the training process. Recently, emerging paradigms like Liquid Neural Networks (Ramin Hasani) and Dynamic Deep Learning (Richard Sutton) have attempted to address this rigidity. However, these approaches still rely heavily on randomly selecting and resetting parts of the network to maintain learning dynamics, potentially improving real-time reaction and long-term learning. Nevertheless, they still face challenges in solving the problem of AI hallucinations. A fundamental paradigm shift for AI is needed in our time, but it takes time, and before that, this paradigm may already be overwhelming for both machines and humans.
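Catastrophic forgetting is easy to demonstrate even without a neural network. In this minimal sketch (a hypothetical toy example of my own, with made-up tasks), a single weight is trained by gradient descent on task A, then on task B; training on B simply overwrites what was learned on A, because nothing in plain gradient descent protects old knowledge.

```python
def train(w, data, lr=0.1, steps=200):
    """Gradient descent on squared error for a one-weight linear model."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]    # task A: y = 2x
task_b = [(1.0, -3.0), (2.0, -6.0)]  # task B: y = -3x

w = train(0.0, task_a)
error_a_before = abs(w * 1.0 - 2.0)   # near zero: task A learned

w = train(w, task_b)                  # sequential training, no replay
error_a_after = abs(w * 1.0 - 2.0)    # large: task A forgotten

print(f"error on task A before B: {error_a_before:.4f}")
print(f"error on task A after  B: {error_a_after:.4f}")
```

The error on task A jumps from essentially zero to about 5 after training on task B, which is the whole problem in miniature: the same plasticity that lets the model learn the new task destroys the old one.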

Welcome

Photographed by YEUNG Tsz Ying


Hi, welcome to my online space. My name is Lazarus Chan, and I am a new media artist now based in Hong Kong.

My work sits at the intersection of science, technology, the humanities, and art, and I am particularly fascinated by themes of consciousness, time, and artificial intelligence. Much of my work involves generative art and installations, posing questions from a humanistic and philosophical standpoint. I will create a new work, “Stochastic Camera (version 0.4) – weight of witness”, in August 2025, and you can follow my progress here.

This studio is an online space for me to continue my explorations and share my thoughts with you. I will write a series of posts, sharing my ongoing reflections on AI as it becomes more integrated into our world.

You are welcome to leave a comment or send me questions.

Lazarus Chan

More about the photo:
https://www.lazaruschan.com/artwork/golem-wander-in-crossroads
