In mid-June this year, my colleagues and I were able to host a little audio/visual electroacoustic festival again after a long lockdown; here is a picture:
While listening to the pieces presented there, the old dialectic of gesture vs. process in music started revolving in my head again. While I was hearing a lot of beautiful, intricate, meticulously crafted processes, I was missing a sense of gesture: an overarching "story arc" that bounds the artistic narrative at both ends - quite the opposite of a process, which repeats indefinitely.
Story vs System
Let me contrast this anecdote with another one. Two years or so ago, on a whim I took a "creative personality" test by Adobe that had been sent to me. I remember only one thing about it: it contained the question "Are you a story or a system type?"
This innocent question hit me like a sucker punch. This, exactly this, is what I had been struggling with my entire artistic life, and it more or less defines my engineering approach. I am, for better or worse, a system guy. I see connections where others don't, theorize about them, and then implement them. On the other hand, I went to study electroacoustic music because I needed guidance on how to tell a story with sound.
So in a way, while sitting in that audience listening to beautiful textures, I met my former self. And I began to ask myself: is there a way to synthesize those two dramaturgical poles? In other words: how do we conceptualize gestural processes? Or procedural gestures?
Coupling Leads to Rigidity
At the moment of writing this, I'm mostly entrenched in my software engineering adventures, so let me give you an example from that field: the original definition of Object-Oriented Programming by Dr. Alan Kay (brought to me in Avdi Grimm's excellent course "Master the Object-Oriented Mindset in Ruby and Rails") conceived of a system of entities communicating with each other via messages - not unlike a biological organism that is made up of cells and thereby capable of higher levels of organization.
In software engineering, we speak of two opposing qualities called cohesion and coupling. The more one cell/object reaches into the internals of another object, the more tightly coupled and rigid the whole structure becomes. Conversely, cells should be cohesive, i.e. concern themselves first and foremost with their own inner state, and only reach out to notify other cells of their state on special occasions. If you follow these two principles, a system stays flexible, and programming it to perform gestural behavior (not unfittingly called orchestration) becomes possible.
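As a minimal sketch of what this looks like in practice (plain Python with hypothetical names, chosen only for illustration), compare telling a cell what happened with reaching into its internals:

```python
class Synth:
    """A cohesive cell: owns its state, reacts to messages."""

    def __init__(self):
        self._cutoff = 1000.0  # internal state, not touched from outside

    def handle(self, message, value=None):
        # The synth decides for itself how to react to a message.
        if message == "brighten":
            self._cutoff = min(self._cutoff * 2, 20000.0)


class Sequencer:
    """Notifies connected cells of events; never reaches into them."""

    def __init__(self):
        self._listeners = []

    def connect(self, listener):
        self._listeners.append(listener)

    def emit(self, message, value=None):
        # Tell, don't ask: broadcast the event, let each cell act on it.
        for listener in self._listeners:
            listener.handle(message, value)


sequencer = Sequencer()
sequencer.connect(Synth())
sequencer.emit("brighten")  # vs. the coupled way: synth._cutoff = 2000.0
```

The last comment shows the tightly coupled alternative: assigning to another object's internals directly, which welds the two together.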
Autonomous Agents
While pondering these connections, something else came to mind: in "The Nature of Code", Daniel Shiffman gives an expressive and entertaining introduction to autonomous agents, known to most people from simulations of flocking behavior. Let's quickly recapitulate what defines an autonomous agent:
An autonomous agent has a limited ability to perceive its environment.
An autonomous agent processes the information from its environment and calculates an action.
That sounds astoundingly familiar. Objects in Object-Oriented Programming behave very similarly: they encapsulate their own inner state and act on signals they receive from their environment (other objects messaging them, for example).
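As a toy sketch of that parallel (my own example, not code from "The Nature of Code"), an agent with limited perception that computes its own action might look like this:

```python
import math

class Agent:
    """Perceives a limited neighborhood, computes its own action."""

    def __init__(self, x, y, perception_radius=50.0):
        self.x, self.y = x, y
        self.perception_radius = perception_radius

    def perceive(self, others):
        # Limited perception: only agents within the radius are visible.
        return [o for o in others
                if o is not self
                and math.dist((self.x, self.y), (o.x, o.y)) < self.perception_radius]

    def step(self, others):
        # Process the perceived information and calculate an action:
        # here, a simple cohesion rule steering toward the local center.
        neighbors = self.perceive(others)
        if not neighbors:
            return
        cx = sum(o.x for o in neighbors) / len(neighbors)
        cy = sum(o.y for o in neighbors) / len(neighbors)
        self.x += (cx - self.x) * 0.1
        self.y += (cy - self.y) * 0.1


agents = [Agent(0, 0), Agent(10, 10), Agent(300, 300)]
for _ in range(10):
    for a in agents:
        a.step(agents)
```

No agent inspects another agent's internals beyond its public position, and no central controller exists - the flock-like behavior emerges from the local rules alone.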
Neuroscience
Another manifestation of this principle can be observed in the human brain. In a neurobiological rendering of the small-world concept, neurons seem to be organized in similar "communities": neighboring cells are connected to a "principal neuron", and these principal neurons in turn form communities around a "principal principal neuron", and so on [1]. This structure seems to be more efficient than a matrix in which every neuron is connected to every other one. Somehow, this fractal organization pattern of communication pathways seems to be an almost universal rule of nature.
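A quick back-of-the-envelope count (my own illustration, not taken from [1]) suggests why such nesting pays off: fully meshing n cells requires n(n-1)/2 links, while a tree of nested communities needs only n-1:

```python
def full_mesh_links(n):
    # Every cell wired directly to every other cell.
    return n * (n - 1) // 2

def hierarchy_links(n):
    # A tree of nested communities: each cell keeps a single
    # link to the principal of its community.
    return n - 1

for n in (10, 1_000, 100_000):
    print(f"{n} cells: {full_mesh_links(n)} mesh links vs. {hierarchy_links(n)} tree links")
```

The hierarchical wiring grows linearly instead of quadratically, at the price of messages sometimes having to travel a few hops through principals.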
Loosely Coupled Musical Systems
How then, you will ask yourself, do we apply all this knowledge to music composition and performance? Well, for one, I've noticed that many electronic musicians design intricate systems of sound generators, signal processors, sequencers etc. - both analog and digital - to create elaborate processes, i.e. musical output without a defined start and end. In effect, these are often the equivalents of tightly coupled software systems: they define a certain process so strictly that there isn't much room for gestural expression within it anymore. The process is the system.
I conjecture that if you design your electronic music composition or performance setup
as a network of autonomous agents ("cells")
that are capable of responding to outside stimuli,
that are signaled of certain events by other cells, but only on well-defined communication paths ("connections"),
and that do not allow interference with their inner workings (in software engineering we call this encapsulation and, by extension, the "tell, don't ask" principle),
you will arrive at a system that allows for maximal gestural expression while still being just that - a system capturing a certain process of sound generation. In other words: design your systems in such a way that you only give directions and no longer have to interfere with the respective subcomponents ("turning knobs"). Upon arrival of a sound or control signal, each subcomponent should be able to act autonomously and deterministically (which, by the way, doesn't exclude randomness). More than that, ideally they act as black boxes that don't even afford you any means of rewiring their internals - then you'll be able to fully concentrate on designing, rehearsing and performing elaborate musical, but still procedural, gestures.
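To close, here is a minimal sketch of such a setup (pure Python with hypothetical cell names; no concrete hardware or audio library is implied): each cell encapsulates its state, cells talk only over explicit connections, and the performer merely sends gesture-level directions.

```python
import random

class Cell:
    """Base class for an autonomous musical agent."""

    def __init__(self):
        self._connections = []

    def connect(self, other):
        # Well-defined communication path: the only way cells interact.
        self._connections.append(other)

    def signal(self, event):
        for other in self._connections:
            other.receive(event)

    def receive(self, event):
        raise NotImplementedError


class Pulse(Cell):
    """Emits clock events; knows nothing about what listens to it."""

    def tick(self):
        self.signal("tick")


class Texture(Cell):
    """Responds to events autonomously - randomness included."""

    def __init__(self):
        super().__init__()
        self._density = 0.5  # inner state, encapsulated

    def receive(self, event):
        if event == "swell":
            self._density = min(self._density + 0.1, 1.0)
        elif event == "tick" and random.random() < self._density:
            print(f"grain at density {self._density:.1f}")


# Wiring: give directions, don't turn the knobs.
pulse, texture = Pulse(), Texture()
pulse.connect(texture)

texture.receive("swell")   # a gesture-level direction from the performer
for _ in range(4):
    pulse.tick()           # the process unfolds on its own
```

Note that the performer never touches _density directly: the gesture ("swell") is a direction, and how the texture realizes it remains its own business.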