Artificially synthesising data for audio classification and segmentation to improve speech and music detection in radio broadcast

Research output: Chapter in Book/Report/Conference proceeding › Conference proceedings published in a book › peer-review

Abstract

Segmenting audio into homogeneous sections such as music and speech helps us understand the content of audio. It is useful as a preprocessing step to index, store, and modify audio recordings, radio broadcasts and TV programmes. Deep learning models for segmentation are generally trained on copyrighted material, which cannot be shared. Annotating these datasets is time-consuming and expensive, which significantly slows down research progress. In this study, we present a novel procedure that artificially synthesises data resembling radio signals. We replicate the workflow of a radio DJ in mixing audio and investigate parameters like fade curves and audio ducking. We trained a Convolutional Recurrent Neural Network (CRNN) on this synthesised data and outperformed state-of-the-art algorithms for music-speech detection. This paper demonstrates the data synthesis procedure as a highly effective technique to generate large datasets to train deep neural networks for audio segmentation.

Original language: English
Title of host publication: 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 636-640
Number of pages: 5
ISBN (Electronic): 9781728176055
DOIs
Publication status: Published - 2021
Event: 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada
Duration: 6 Jun 2021 – 11 Jun 2021

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2021-June
ISSN (Print): 1520-6149

Conference

Conference: 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021
Country/Territory: Canada
City: Virtual, Toronto
Period: 6/06/21 – 11/06/21

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering

Keywords

  • Audio Classification
  • Audio Segmentation
  • Deep Learning
  • Music-speech Detection
  • Training Set Synthesis
