You Only Hear Once: A YOLO-like Algorithm for Audio Segmentation and Sound Event Detection

Satvik Venkatesh*, David Moffat, Eduardo Reck Miranda

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Audio segmentation and sound event detection are crucial topics in machine listening that aim to detect acoustic classes and their respective boundaries. They are useful for audio-content analysis, speech recognition, audio indexing, and music information retrieval. In recent years, most research articles have adopted segmentation-by-classification, a technique that divides audio into small frames and classifies each frame individually. In this paper, we present a novel approach called You Only Hear Once (YOHO), which is inspired by the YOLO algorithm popularly adopted in computer vision. We convert the detection of acoustic boundaries into a regression problem instead of frame-based classification, using separate output neurons to detect the presence of an audio class and to predict its start and end points. The relative improvement in F-measure of YOHO over the state-of-the-art Convolutional Recurrent Neural Network ranged from 1% to 6% across multiple datasets for audio segmentation and sound event detection. Because the output of YOHO is more end-to-end and has fewer neurons to predict, inference is at least 6 times faster than segmentation-by-classification. In addition, because this approach predicts acoustic boundaries directly, post-processing and smoothing are about 7 times faster.
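To make the regression formulation in the abstract concrete, below is a minimal NumPy sketch of a YOHO-style output encoding: per coarse time bin and per class, one neuron flags presence and two regress the start and end points relative to the bin. All names (encode_targets, yoho_loss), the class set, and the bin geometry are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a YOHO-style output encoding (assumed layout, not the
# authors' code): per time bin and class -> [presence, rel_start, rel_end].
import numpy as np

N_CLASSES = 3    # e.g. music, speech, noise (assumed class set)
N_BINS = 9       # coarse output bins per input example (assumed)
BIN_DUR = 0.889  # seconds per bin (assumed: an 8 s example / 9 bins)

def encode_targets(events):
    """Encode (class_id, start_s, end_s) events into an (N_BINS, N_CLASSES*3)
    target array."""
    y = np.zeros((N_BINS, N_CLASSES * 3), dtype=np.float32)
    for cls, start, end in events:
        first = int(start // BIN_DUR)
        last = min(int(np.ceil(end / BIN_DUR)), N_BINS) - 1
        for b in range(first, last + 1):
            bin_start = b * BIN_DUR
            seg_start = max(start, bin_start)
            seg_end = min(end, bin_start + BIN_DUR)
            y[b, cls * 3] = 1.0                                    # presence
            y[b, cls * 3 + 1] = (seg_start - bin_start) / BIN_DUR  # rel. start
            y[b, cls * 3 + 2] = (seg_end - bin_start) / BIN_DUR    # rel. end
    return y

def yoho_loss(y_true, y_pred):
    """Squared error in which the start/end regression terms are masked so
    they only contribute where the class is actually present."""
    pres_t = y_true[:, 0::3]
    pres_p = y_pred[:, 0::3]
    loss = np.sum((pres_t - pres_p) ** 2)
    for off in (1, 2):  # start and end offsets
        loss += np.sum(pres_t * (y_true[:, off::3] - y_pred[:, off::3]) ** 2)
    return loss

# Usage: one speech event (assumed class id 1) from 1.0 s to 3.5 s.
target = encode_targets([(1, 1.0, 3.5)])
print(target.shape)                            # (9, 9)
print(yoho_loss(target, np.zeros_like(target)))
```

Because each predicted segment carries its own start and end, decoding reduces to thresholding the presence neurons and merging overlapping segments per class, which is consistent with the faster post-processing the abstract reports.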
Original language: English
Journal: Applied Sciences
Volume: 12
Issue number: 7
Early online date: 24 Mar 2022
Publication status: Published - 24 Mar 2022
