Performing with a Generative Electronic Music Controller

Dr Charles Martin - The Australian National University

web: charlesmartin.com.au     twitter/github: @cpmpercussion

Ngunnawal & Ngambri Country

Embodied Music Generation

  • note generation: generate “symbolic” music, i.e., notes (A, B, C, half-note, quaver, etc.), an abstraction of the sounds created by some musical instruments.

  • embodied gesture generation: generate the movements a performer makes to operate a particular musical instrument.

this project explores embodied gesture generation in an improvised electronic music context!
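
The distinction above can be made concrete with a minimal sketch (not taken from the talk; field names are illustrative):

```python
# Symbolic note generation works with discrete musical events:
note = {"pitch": "A4", "duration": "quaver"}

# Embodied gesture generation works with continuous control data:
# here, a time delta plus the positions of 8 knobs, each in [0, 1].
gesture_frame = {
    "dt": 0.05,  # seconds since the previous frame
    "knobs": [0.2, 0.7, 0.5, 0.0, 0.9, 0.3, 0.6, 0.1],
}
```

A gestural model predicts streams of frames like this rather than note events.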

Why do this?

  • Lots of musical instruments don’t use “notes”

  • e.g., turntable, mixer, modular synthesiser, effects pedal, etc.

  • what do “intelligence” and “co-creation” look like for these instruments?

  • can we incorporate generative AI into a longer-term performance practice?

Generative AI System

  • gestural predictions are made by a Mixture Density Recurrent Neural Network (implemented using “Interactive Music Prediction System”—IMPS)

  • MDRNN: an extension of common LSTM/RNN designs to allow expressive predictions of multiple continuous variables.

  • MDRNN specs: two 32-unit LSTM layers and a 9-dimensional mixture density layer (8 knobs + time)

  • IMPS: A CLI Python program that provides MDRNN, data collection, training and interaction features.

  • communicates with music software over OSC (Open Sound Control)

  • in this case, MDRNN is configured for “call-and-response” interaction (or “continuation”)
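
A mixture density layer parameterises a Gaussian mixture over the next 9-D gesture vector (time delta + 8 knobs). A minimal NumPy sketch of sampling from such an output (shapes and names are assumptions, not IMPS's actual API):

```python
import numpy as np

def sample_mdn(pi, mu, sigma, rng=None):
    """Sample one 9-D gesture vector from a mixture density output.
    Assumed shapes (K mixture components, diagonal covariance):
      pi:    (K,)   mixture weights, summing to 1
      mu:    (K, 9) component means
      sigma: (K, 9) component standard deviations
    """
    if rng is None:
        rng = np.random.default_rng()
    k = rng.choice(len(pi), p=pi)       # pick a mixture component
    return rng.normal(mu[k], sigma[k])  # sample from that Gaussian

# Toy parameters: two components over 9 dimensions.
pi = np.array([0.6, 0.4])
mu = np.zeros((2, 9))
sigma = np.full((2, 9), 0.1)
x = sample_mdn(pi, mu, sigma)  # x.shape == (9,)
```

In a call-and-response setup, sampled vectors like `x` would be sent to the music software over OSC once the performer stops playing.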

Performances and Experiences

  • deployed in performance since 2019

  • so it works! and it’s practical!

  • but is it better than a random walk generator?

Influence and Co-Creation

  • can be steered (a little bit) by performer’s gestures

  • tends to continue adjusting knobs the performer last used

  • learns interesting behaviours from data (moving one vs multiple knobs, pauses, continuous changes)

  • good for the performer to have a different task to work on.

  • also important to allow performer to “just listen”

Small Data and Co-Adaptation

  • interactions from each performance are saved

  • some of these have been incorporated into training datasets

  • co-adaptive: system grows and changes along with the performer (yet to be studied rigorously)
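
Saving each performance's interactions for later training can be sketched as simple timestamped logging (a minimal illustration; the filename and format are hypothetical, not IMPS's actual log format):

```python
import csv
import time

def log_frame(writer, knobs):
    """Append one timestamped row of 8 knob positions to the log."""
    writer.writerow([time.time()] + list(knobs))

with open("performance-log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time"] + [f"knob{i}" for i in range(8)])
    log_frame(writer, [0.5] * 8)
```

Logs collected this way can be merged into the next training dataset, so the model gradually reflects the performer's own playing.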

Thanks!