Active Learning for Efficient Audio Annotation and Classification with a Large Amount of Unlabeled Data
Wang, Y., Mendez, A.E.M., Cartwright, M., Bello, J.P. Active Learning for Efficient Audio Annotation and Classification with a Large Amount of Unlabeled Data. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019.
Abstract
Many sound classification problems have target classes that are rare or unique to the problem's context. For these problems, existing datasets are insufficient, and we must create new, problem-specific datasets to train classification models. However, annotating a new dataset for every new problem is costly. Active learning could reduce this annotation cost, but it has been understudied in the context of audio annotation. In this work, we investigate active learning to reduce the cost of annotating a sound classification dataset unique to a particular problem. We evaluate three certainty-based active learning query strategies and propose a new one: alternating confidence sampling. Using this strategy, we demonstrate reduced annotation costs when actively training models with both experts and non-experts, and we perform a qualitative analysis on 20k unlabeled recordings to show that our approach yields a model that generalizes well to unseen data.
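The abstract refers to certainty-based query strategies. As a rough illustration only (not the paper's alternating confidence sampling method, and all names here are hypothetical), a minimal least-confidence sampler selects the unlabeled examples whose top predicted class probability is lowest:

```python
def least_confidence_query(probs, k):
    """Return indices of the k unlabeled examples the model is
    least confident about, given per-class probabilities per row."""
    # Confidence of each example = probability of its top predicted class
    confidences = [max(row) for row in probs]
    # Rank examples from least to most confident
    ranked = sorted(range(len(probs)), key=lambda i: confidences[i])
    return ranked[:k]

# Toy predictions over four unlabeled clips, three classes each
probs = [
    [0.90, 0.05, 0.05],  # confident
    [0.40, 0.35, 0.25],  # uncertain
    [0.34, 0.33, 0.33],  # most uncertain
    [0.70, 0.20, 0.10],
]
print(least_confidence_query(probs, 2))  # -> [2, 1]
```

In an active learning loop, the selected clips would be sent to an annotator, labeled, added to the training set, and the model retrained before the next query round.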