Week 3 / Day 5: glitching and last-minute ditching

20.6.2025



Today we set up the project space and worked there. The PD patch kept crashing, sometimes on all three computers at once. After working on it all day, we decided to ditch the sensors and ‘go rogue’… into the wilderness of the AI model’s voices, no longer able to control or affect the voice output with controllers, but only through the subtleties of listening. It’s a challenge the day before the sharing, but after working with the set-up for three weeks I felt confident I could handle it.

The advantage is that without the sensors – which were like a kind of handcuff or chastity belt! – I feel freer to move and can respond more spontaneously. We’re going to rely on analogue movement feedback, especially from the bells that pick up the vibrations of a body in motion. I remember the camel in Khajuraho with its bell: whenever I heard that beautiful tinkling, I would look up to see the camel saunter past. Bells are definitely a good form of feedback, a way to go against the straight line and to saunter in search of grace.

I hope that in the future we can use different kinds of microphones that pick up unexpected noises and movement from the audience. Maybe a geophone or seismic mic could pick up micromovements the audience is unaware of and make them become aware of them: activity like fidgeting, whispering, walking about, or even shifting as they start to fall asleep. I think this AI model has a lesson to teach about conscious awareness as a form of presence, versus self-consciousness as a source of anxiety…