Julian Rubisch

May 7, 2024

Personal Newsletter 2024/04

Hey everyone,

Let's get the important news out the door first:

I know only 1% of my audience will be able to attend, but...

I'll be performing "Arecibo" at Moozak Multichannel @ Semmelweis Klinik on June 1, 2024.

More details to follow on my social media as soon as I get them! Here are a few notes on the piece itself:


In three movements, this piece explores the substance of the Arecibo message, broadcast toward a nearby star cluster in 1974. Derived from just two sonic renditions of this communication attempt, it distills the essence of the information about humanity contained within and transforms it into a playful dance of acousmatic entities.


Movement 1 is a more verbatim translation of the message into sound space. The sample is folded back upon itself and resynthesized using parameters derived from its own analysis.


Movement 2 captures the moment as the signal is being dispersed into outer space. Reflections and interferences interweave until the broadcast reaches its destination in distorted form. 


Movement 3 speculates about how an answer from a spacefaring race might sound. The message is repurposed in joyful ways and knotted into complex patterns. The question of whether the answer can be decoded again is left open.

 

An Event I Enjoyed

"Spirits In Complexity" is a new research project based at mdw (University of Music and Performing Arts Vienna), and I attended the Kick-Off's symposium's concerts program (I couldn't make it to the talks, but will rewatch them later).

A memorable experience was "iteration 1" by Marco Döttlinger with Marino Formenti. Marco trained a neural network on Formenti's works, with snippets arbitrarily labeled from "very bad" to "very good".

In the concert, Formenti was confronted with this AI for the first time. His prompt was "play what you want, but be very good", without going into detail about what exactly was deemed "good" or "bad". Indeed, the only feedback for him and the audience was a visible gauge oscillating between "very bad" and "very good", and he and the audience had to discover its semantics together. As the title suggests, this was only the first iteration in what is to become an ongoing exploration of this musical space. It was a simple yet magnificent concept, and it sparked ideas for other AI-feedback pieces I might realize in the future. Stay tuned!


A Book I Enjoyed Reading

I finally finished the "Three-Body Problem" trilogy with "Death's End", the final installment. I must admit that the pace at which new characters, plots, and concepts were introduced somewhat bothered me. It was an enjoyable read, but a couple of parts seemed very far-fetched and some loose ends remained unresolved.

There was, however, one narrative device I found particularly enthralling. 🚨 Spoiler ahead! Skip the next paragraph if you don't want the surprise spoiled.



There is a section in the book where a human "spy", planted in the enemy aliens' fleet, is allowed to communicate back to humanity. This communication is monitored, however, so he has to disguise any intelligence he wants to share. He does this by telling three fairy tales, each harmless on the surface; humanity manages to decode them, at least in part, and puts them to some use. Now, what is encoded in these fairy tales isn't as interesting as the subtext they carry: a lot of information is always potentially hidden or lost in communication, and an alert observer might be able to look beneath the surface of meaning.



The different forms communication can take, and their ramifications for how we perceive and use the conveyed information, have always fascinated me. I've thought and written about this a lot, and explored it in my music. In today's AI craze, however, another dimension is added: we communicate with large language models like ChatGPT in a conversational manner, but doesn't that also imply that there is always ambiguity, something "between the lines", at play? Or, put more bluntly, is it possible that an AI, inadvertently or not, gives us the metaphorical runaround? Or that we read things into those conversations that aren't there? Is ChatGPT a true, quasi-personified counterpart? I think that artistic and humanities-based research projects like "Spirits in Complexity" are partly trying to answer these questions. I shall make my own inquiries too, though.


And that's it from me for April!

Tada,
Julian