A Knowledge-based, Data-driven Method for Action-sound Mapping

F. Visi, B. Caramiaux, E. R. Miranda, M. McLoughlin

Research output: Contribution to journal › Conference proceedings published in a journal › peer-review


Abstract

This paper presents a knowledge-based, data-driven method that uses data describing action-sound couplings, collected from a group of people, to generate multiple complex mappings between the performance movements of a musician and sound synthesis. This is done using a database of multimodal motion data collected from multiple subjects and coupled with sound synthesis parameters. A series of sound stimuli is synthesised using the sound engine that will be used in performance. Each participant is asked to listen to each sound stimulus and move as if they were producing the sound with a musical instrument they are given. Multimodal data is recorded during each performance and paired with the synthesis parameters used to generate the sound stimulus. The dataset created using this method is then used to build a topological representation of the performance movements of the subjects. This representation is in turn used to interactively generate training data for machine learning algorithms and to define mappings for real-time performance. To better illustrate each step of the procedure, we describe an implementation involving clarinet, motion capture, wearable sensor armbands, and waveguide synthesis.
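As a rough illustration of the data-driven mapping idea described above, the sketch below pairs motion-feature vectors with synthesis parameters in a database and maps a new motion frame to parameters by averaging its nearest recorded examples. This is a minimal, hypothetical example: the feature and parameter dimensions, the random data, and the k-nearest-neighbour averaging are assumptions, not the paper's actual method, which builds a topological representation of the movement data and uses interactive machine learning.

```python
import numpy as np

# Hypothetical database: each recording pairs a motion-feature vector
# (e.g. descriptors extracted from motion capture and armband sensors)
# with the synthesis parameters used for the corresponding sound stimulus.
rng = np.random.default_rng(0)
motion_features = rng.random((50, 6))  # 50 recordings, 6 motion descriptors (assumed)
synth_params = rng.random((50, 3))     # 3 synthesis parameters per sound (assumed)

def map_motion_to_sound(query, features, params, k=3):
    """Map a new motion frame to synthesis parameters by averaging the
    parameters of its k nearest neighbours in the recorded dataset."""
    dists = np.linalg.norm(features - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return params[nearest].mean(axis=0)

# At performance time, an incoming motion frame yields synthesis parameters.
new_frame = rng.random(6)
print(map_motion_to_sound(new_frame, motion_features, synth_params))
```

In a real implementation the lookup would run in real time on streaming sensor data, and the intermediate topological representation would let the performer select which regions of the recorded movement space feed the training of the mapping model.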
Original language: English
Journal: Proceedings of NIME 2017: New Interfaces for Musical Expression
Publication status: Published - 15 May 2017
Event: NIME 2017: New Interfaces for Musical Expression - Copenhagen
Duration: 15 May 2017 - 19 May 2017
