web: charlesmartin.au mastodon: @[email protected]
Intelligent Musical Instruments become a normal part of musical performance and production.
Assist professional musicians & composers
Engage novice musicians & students
Reveal creative interaction with intelligent systems
Create new kinds of music!
| Music Systems | Data |
|---|---|
| Score / Notation | Symbolic Music, Image |
| Digital Instruments | MIDI |
| Recording & Production | Digital Audio |
| New Musical Interfaces | Gestural and Sensor Data |
| Show Control | Video, Audio, Lighting, Control Signals |
Time per prediction (ms) with different sizes of LSTM layers.
Time per prediction (ms) with different MDN output dimensions (64 LSTM units).
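Timings like these could be measured with a loop such as the following: a minimal sketch assuming TensorFlow/Keras, with a Dense layer standing in for the MDN output head (same parameter count) and illustrative layer sizes rather than the exact benchmark setup.

```python
# A minimal latency-measurement sketch, assuming TensorFlow/Keras. The MDN
# output layer is stood in for by a Dense layer with the same parameter count;
# sizes and mixture counts are illustrative, not the talk's exact setup.
import time
import numpy as np
from tensorflow import keras

LSTM_UNITS = 64  # vary to reproduce the layer-size comparison above
OUT_DIMS = 2     # MDN output dimensions
N_MIXES = 5      # mixture components
MDN_PARAMS = N_MIXES * (2 * OUT_DIMS + 1)  # mus + sigmas + mixture weights

model = keras.Sequential([
    keras.Input(shape=(1, OUT_DIMS), batch_size=1),  # one event at a time
    keras.layers.LSTM(LSTM_UNITS, stateful=True),    # keep state between calls
    keras.layers.Dense(MDN_PARAMS),
])

x = np.zeros((1, 1, OUT_DIMS), dtype=np.float32)
model.predict(x, verbose=0)  # warm up before timing

n = 100
start = time.perf_counter()
for _ in range(n):
    model.predict(x, verbose=0)
print(f"{1000 * (time.perf_counter() - start) / n:.2f} ms per prediction")
```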
12K sample dataset (15 minutes of performance)
Takeaway: Smallest model best for small datasets. Don’t bother training for too long.
100K sample dataset (120 minutes of performance)
Takeaway: 64- and 128-unit models still best!
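One way these training takeaways might look in practice, as a hedged Keras sketch: the arrays and hyperparameters below are placeholders, not the study's actual data or configuration.

```python
# A sketch of acting on the takeaways, assuming Keras. The arrays are
# placeholders for a performance dataset, not the study's actual data.
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 30, 2).astype(np.float32)  # placeholder sequences
y = np.random.rand(1000, 2).astype(np.float32)      # placeholder targets

model = keras.Sequential([
    keras.Input(shape=(30, 2)),
    keras.layers.LSTM(64),  # 64 units held up on both dataset sizes
    keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")

# "Don't bother training for too long": stop once validation loss stalls.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=4, restore_best_weights=True)
model.fit(X, y, validation_split=0.1, epochs=100,
          callbacks=[early_stop], verbose=0)
```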
Takeaway: when sampling, make the Gaussians less diverse and the categorical distribution more diverse.
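Concretely, this means sampling with two temperatures: one below 1 on the Gaussian spread and one above 1 on the mixture weights. A numpy-only sketch, where the `[mus, sigmas, pi_logits]` parameter layout and the default temperature values are assumptions for illustration:

```python
# Temperature-adjusted MDN sampling: pi_temp > 1 spreads probability across
# more mixture components; sigma_temp < 1 narrows each Gaussian. The [mus,
# sigmas, pi_logits] layout and default temperatures are assumptions.
import numpy as np

def sample_mdn(params, out_dims, n_mixes, pi_temp=1.5, sigma_temp=0.5):
    mus = params[:n_mixes * out_dims].reshape(n_mixes, out_dims)
    sigmas = params[n_mixes * out_dims:
                    2 * n_mixes * out_dims].reshape(n_mixes, out_dims)
    pi_logits = params[2 * n_mixes * out_dims:]

    # Categorical more diverse: divide logits by a temperature above 1.
    scaled = pi_logits / pi_temp
    pis = np.exp(scaled - scaled.max())
    pis /= pis.sum()
    k = np.random.choice(n_mixes, p=pis)

    # Gaussians less diverse: shrink the chosen component's spread.
    return np.random.normal(mus[k], sigmas[k] * sigma_temp)

# Example with random (already-positive) parameters for 5 mixtures, 2 dims:
print(sample_mdn(np.random.rand(5 * (2 * 2 + 1)), out_dims=2, n_mixes=5))
```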
12 participants
two independent factors: model and feedback
model: human, synthetic, noise
feedback: motor on, motor off
Changing the ML model had a significant effect on Q2, Q4, Q5, Q6, and Q7.
human model most “related”, noise was least
human model most “musically creative”
human model easiest to “influence”
noise model not rated badly!
Participants generally preferred human or synth, but not always!
Human and synth: wider range of performance lengths with motor on.
Noise: wider range with motor off.
Studied a self-contained intelligent instrument in genuine performance.
Physical representation could be polarising.
Performers work hard to understand and influence ML model.
A constrained, intelligent instrument can produce a compelling experience.
Emulate or enhance ensemble experience
Engage in call-and-response improvisation (see the sketch after this list)
Model a performer’s personal style
Modify/improve performance actions in place
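As a sketch of the call-and-response mode above: the instrument stays quiet while the performer plays, then takes over after a pause. `predict_next_event`, `CallAndResponse`, and the delay threshold are hypothetical stand-ins, not the actual system's implementation.

```python
# Hypothetical call-and-response switching logic; predict_next_event() stands
# in for an RNN sampling step and the threshold is an arbitrary choice.
import random
import time

RESPONSE_DELAY = 2.0  # seconds of performer silence before the model responds

def predict_next_event():
    return random.randint(48, 72)  # stand-in for a sampled MIDI pitch

class CallAndResponse:
    def __init__(self):
        self.last_player_event = time.monotonic()

    def on_player_event(self, pitch):
        # Performer is active: record the time; a real system would also
        # condition the model's RNN state on this event.
        self.last_player_event = time.monotonic()

    def step(self):
        # Model "responds" only once the performer has paused.
        if time.monotonic() - self.last_player_event > RESPONSE_DELAY:
            print("model plays:", predict_next_event())

duet = CallAndResponse()
duet.on_player_event(60)  # performer plays middle C
time.sleep(2.1)           # performer pauses...
duet.step()               # ...and the model answers
```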
Are ML models practical for musical prediction?
Are intelligent instruments useful to musicians?
What happens when musicians and instrument co-adapt?
Can a musical practice be represented as a dataset?
What does an intelligent instrument album or concert sound like?