Abstract
In recent years, machine learning has been widely adopted to automate the audio mixing process. Automatic mixing systems have been applied to various audio effects such as gain adjustment, equalization, and reverberation. These systems can be controlled through visual interfaces, audio examples, knobs, or semantic descriptors. Using semantic descriptors or textual information to control these systems is an effective way for artists to communicate their creative goals. In this paper, we explore the novel idea of using word embeddings to represent semantic descriptors. Word embeddings are generally obtained by training neural networks on large corpora of written text. These embeddings serve as the input layer of a neural network that translates words into EQ settings. Using this technique, the machine learning model can also generate EQ settings for semantic descriptors it has not seen during training. We evaluate the quality of the predictions by comparing the EQ settings produced by humans with those predicted by the neural network. The results show that the embedding layer enables the neural network to understand semantic descriptors. We observed that models with embedding layers perform better than those without, although they still fall short of the human-labeled settings.
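The abstract describes a network whose input layer consumes a pre-trained word embedding and whose output is a set of EQ parameters. The sketch below illustrates this word-to-EQ mapping under stated assumptions; it is not the authors' implementation. The embedding dimensionality (300, as in common GloVe/word2vec models), the number of EQ parameters, the hidden-layer width, and the `Word2EQ` class name are all illustrative choices.

```python
# Minimal sketch (assumptions, not the paper's architecture): a feed-forward
# network that maps a pre-trained word embedding for a semantic descriptor
# (e.g., "warm" or "bright") to a vector of EQ settings.
import torch
import torch.nn as nn

EMBEDDING_DIM = 300   # assumed size of the pre-trained word vectors
NUM_EQ_PARAMS = 10    # assumed output size, e.g., gains for 10 EQ bands


class Word2EQ(nn.Module):
    def __init__(self, embedding_dim=EMBEDDING_DIM, num_eq_params=NUM_EQ_PARAMS):
        super().__init__()
        # Embedding vector in, EQ parameter vector out.
        self.net = nn.Sequential(
            nn.Linear(embedding_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_eq_params),
        )

    def forward(self, embedding):
        # embedding: (batch, embedding_dim) word vector for a descriptor
        return self.net(embedding)


# Usage: look up the descriptor's vector in a pre-trained embedding table,
# then predict EQ settings. Because unseen descriptors still have embeddings,
# the model can generalize to words absent from the training labels.
model = Word2EQ()
warm_vector = torch.randn(1, EMBEDDING_DIM)  # placeholder for a real embedding lookup
eq_settings = model(warm_vector)
print(eq_settings.shape)  # torch.Size([1, 10])
```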
| Original language | English |
|---|---|
| Pages (from-to) | 753-763 |
| Number of pages | 11 |
| Journal | Journal of the Audio Engineering Society |
| Volume | 70 |
| Issue number | 9 |
| Early online date | 12 Sept 2022 |
| DOIs | |
| Publication status | Published - 12 Sept 2022 |
Keywords
- Audio Mixing
- Automatic Mixing
- Equalization
- Semantic Word Vectors