Abstract
This paper introduces a new brain-computer interface (BCI) system that uses electroencephalogram (EEG) information to steer generative rules in order to compose and perform music. It begins by reviewing previous attempts at designing BCI systems, including systems for music. It then presents a short technical introduction to EEG sensing and analysis. Next, it introduces the generative music component of the system, which employs our own adapted version of a machine-learning technique based on ATNs (Augmented Transition Networks) for the computer replication of musical styles. The system continuously monitors the subject's EEG and activates generative rules associated with activity in different frequency bands of the EEG spectrum. The system also measures the complexity of the EEG signal in order to modulate the tempo (beat) and dynamics (loudness) of the performance. Subjects can be trained to select between different classes of generative rules to produce original pieces of music.
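The control mapping described in the abstract, where band-level EEG activity selects a class of generative rules and signal complexity drives tempo and dynamics, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the band boundaries, the use of spectral entropy as the complexity measure, and the tempo and loudness scalings are all illustrative assumptions.

```python
# Minimal sketch of the EEG-to-music control loop described in the abstract.
# Band names, the spectral-entropy complexity measure, and the tempo/dynamics
# scaling below are illustrative assumptions, not the paper's actual method.
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg_window: np.ndarray, fs: float) -> dict:
    """Mean spectral power per EEG frequency band for one analysis window."""
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg_window)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def signal_complexity(eeg_window: np.ndarray) -> float:
    """Normalised spectral entropy in [0, 1] as a stand-in complexity measure."""
    psd = np.abs(np.fft.rfft(eeg_window)) ** 2
    p = psd / psd.sum()
    return float(-(p * np.log2(p + 1e-12)).sum() / np.log2(len(p)))

def control_parameters(eeg_window: np.ndarray, fs: float) -> dict:
    """Map the current EEG window to generative-music control values."""
    powers = band_powers(eeg_window, fs)
    dominant = max(powers, key=powers.get)   # most active frequency band
    c = signal_complexity(eeg_window)
    return {
        "rule_class": dominant,              # which set of generative rules to activate
        "tempo_bpm": 60 + 80 * c,            # higher complexity -> faster beat
        "dynamics": 0.2 + 0.8 * c,           # higher complexity -> louder performance
    }

if __name__ == "__main__":
    # Example: one second of simulated, alpha-dominated EEG sampled at 256 Hz
    fs = 256
    t = np.arange(fs) / fs
    window = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(fs)
    print(control_parameters(window, fs))
```

In a live setting this mapping would run on successive short EEG windows, so the active rule class, tempo, and dynamics are updated continuously as the subject's EEG changes.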
Original language | English |
---|---|
Pages (from-to) | 119-125 |
Number of pages | 7 |
Journal | International Journal on Disability and Human Development |
Volume | 5 |
Issue number | 2 |
Publication status | Published - 20 Nov 2006 |
Keywords
- Assistive music technology
- Brain-computer interface
- Generative music systems
- Bio-signal music controller