web: charlesmartin.au mastodon: @[email protected]
note generation: generating “symbolic” music, i.e., notes (A, B, C, half-note, quaver, etc.), an abstract representation of the sounds created by some musical instruments.
embodied gesture generation: generating the movements a performer makes to operate a particular musical instrument.
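to make the contrast concrete, here is a minimal sketch of the two data representations (the class names and fields are illustrative, not taken from the project's code):

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class NoteEvent:
    """Symbolic music: discrete pitch and duration abstractions."""
    pitch: str      # e.g. "A4"
    duration: str   # e.g. "quaver"


@dataclass
class GestureFrame:
    """Embodied gesture: a time-stamped snapshot of a performer's physical controls."""
    dt: float                  # seconds since the previous frame
    knobs: Tuple[float, ...]   # positions of 8 knobs, each in [0.0, 1.0]


# a gestural performance is a continuous time series of frames, not a list of notes:
performance = [
    GestureFrame(dt=0.00, knobs=(0.10,) * 8),
    GestureFrame(dt=0.05, knobs=(0.10, 0.12, 0.10, 0.10, 0.10, 0.10, 0.10, 0.10)),
]
```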
this project explores embodied gesture generation in an improvised electronic music context!
lots of musical instruments don’t use “notes”
e.g., turntable, mixer, modular synthesiser, effects pedal, etc.
what do “intelligence” and “co-creation” look like for these instruments?
can we incorporate generative ai into a longer-term performance practice?
gestural predictions are made by a Mixture Density Recurrent Neural Network (implemented using “Interactive Music Prediction System”—IMPS)
MDRNN: an extension of common LSTM/RNN designs that predicts a mixture-of-Gaussians distribution over multiple continuous variables, allowing expressive, non-deterministic predictions.
MDRNN specs: two 32-unit LSTM layers and a 9-dimensional mixture density output layer (8 knobs + time)
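a minimal sketch of that architecture, assuming the keras-mdn-layer package; the number of mixture components is my assumption, not a figure from the talk:

```python
from tensorflow import keras
import mdn  # keras-mdn-layer package

DIMS = 9     # 8 knobs + time delta between gestures
N_MIXES = 5  # assumed number of Gaussian mixture components

model = keras.Sequential([
    keras.layers.LSTM(32, return_sequences=True, input_shape=(None, DIMS)),
    keras.layers.LSTM(32),
    mdn.MDN(DIMS, N_MIXES),  # outputs mixture weights, means, and scales
])
model.compile(loss=mdn.get_mixture_loss_func(DIMS, N_MIXES), optimizer="adam")
model.summary()
```

sampling from the predicted mixture at inference time (rather than taking a single most-likely value) is what keeps the generated gestures varied instead of collapsing to an average knob position.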
IMPS: a command-line Python program that provides the MDRNN plus data collection, training, and interaction features.
communicates with music software over OSC (Open Sound Control)
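a hedged sketch of that OSC link using the python-osc package; the addresses ("/interface", "/prediction") and port numbers are assumptions about a typical setup, not verified IMPS defaults:

```python
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

client = SimpleUDPClient("127.0.0.1", 5001)  # music software -> prediction system


def send_knob_state(knobs):
    """Send the current 8 knob positions (floats in [0, 1]) to the predictor."""
    client.send_message("/interface", list(knobs))


def on_prediction(address, *values):
    """Handle a predicted gesture (8 knob values) arriving over OSC."""
    print(f"{address}: {values}")


dispatcher = Dispatcher()
dispatcher.map("/prediction", on_prediction)
server = BlockingOSCUDPServer(("127.0.0.1", 5002), dispatcher)
# server.serve_forever()  # uncomment to listen for predictions
```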
in this case, the MDRNN is configured for “call-and-response” interaction (also called “continuation”)
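roughly, the call-and-response logic works like the sketch below; `get_performer_input`, `model_sample`, and the idle threshold are placeholder names and values, not IMPS internals:

```python
import time

IDLE_THRESHOLD = 2.0  # assumed seconds of performer silence before the model responds


def call_and_response(get_performer_input, model_sample, send_to_synth):
    history = []                  # recent (dt, knobs) gestures
    last_input_time = time.time()
    while True:
        knobs = get_performer_input()  # returns None if the performer is idle
        now = time.time()
        if knobs is not None:
            # performer is playing: record the "call"
            history.append((now - last_input_time, knobs))
            last_input_time = now
        elif now - last_input_time > IDLE_THRESHOLD:
            # performer has paused: let the model continue with a "response"
            dt, predicted_knobs = model_sample(history)
            send_to_synth(predicted_knobs)
            history.append((dt, predicted_knobs))
            time.sleep(dt)
        else:
            time.sleep(0.01)
```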
deployed in performance since 2019
so it works! and it’s practical!
but is it better than a random walk generator?
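for comparison, a random walk knob generator is trivial to build; a hypothetical baseline sketch (not the project's actual comparison code):

```python
import random


def random_walk(knobs, step=0.02):
    """Return a new knob state by nudging each knob slightly, clamped to [0, 1]."""
    return [min(1.0, max(0.0, k + random.uniform(-step, step))) for k in knobs]


state = [0.5] * 8
for _ in range(5):
    state = random_walk(state)
    print([round(k, 3) for k in state])
```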
can be steered (a little bit) by performer’s gestures
tends to continue adjusting knobs the performer last used
learns interesting behaviours from data (moving one vs multiple knobs, pauses, continuous changes)
good for the performer to have a different task to work on.
also important to allow performer to “just listen”
interactions from each performance are saved
some of these have been incorporated into training datasets
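a hedged sketch of what that logging step could look like (one CSV row per gesture; the filenames and columns are illustrative, not the project's real log format):

```python
import csv
import datetime
import os


def open_performance_log(directory="logs"):
    """Create a time-stamped CSV log for one performance."""
    os.makedirs(directory, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%dT%H%M%S")
    f = open(os.path.join(directory, f"performance-{stamp}.csv"), "w", newline="")
    writer = csv.writer(f)
    writer.writerow(["time"] + [f"knob{i}" for i in range(8)])
    return f, writer


def log_gesture(writer, knobs):
    """Append one row: ISO timestamp plus the current 8 knob values."""
    writer.writerow([datetime.datetime.now().isoformat()] + list(knobs))
```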
co-adaptive: system grows and changes along with the performer (yet to be studied rigorously)