
Audio Scene Classification with Deep Recurrent Neural Networks

Phan, Huy; Koch, Philipp; Katzberg, Fabrice; Maass, Marco; Mazur, Radoslaw; Mertins, Alfred (2017) Audio Scene Classification with Deep Recurrent Neural Networks. In: Proceedings of Interspeech 2017, pp. 3043-3047. International Speech Communication Association, Stockholm, Sweden. (doi:10.21437/Interspeech.2017-101) (KAR id:72671)

Abstract

In this work, we introduce an efficient approach to audio scene classification using deep recurrent neural networks. An audio scene is first transformed into a sequence of high-level label tree embedding feature vectors. The vector sequence is then divided into multiple subsequences, on which a deep GRU-based recurrent neural network is trained for sequence-to-label classification. The global predicted label for the entire sequence is finally obtained by aggregating the subsequence classification outputs. Our approach achieves an F1-score of 97.7% on the LITIS Rouen dataset, the largest publicly available dataset for this task. Compared to the best previously reported result on the dataset, our approach reduces the relative classification error by 35.3%.
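
The pipeline described in the abstract can be illustrated compactly. The following is a minimal sketch in PyTorch, not the authors' implementation: the names GRUSceneClassifier and classify_scene, the feature dimension 128, the subsequence length 50, and the class count 19 are all illustrative assumptions. It shows subsequences of label tree embedding vectors being classified by a multi-layer GRU, with the scene-level label obtained by averaging subsequence class probabilities.

```python
# Minimal sketch of the described pipeline (hypothetical shapes and names;
# not the paper's implementation). A scene is a sequence of label tree
# embedding (LTE) vectors; it is split into fixed-length subsequences, each
# classified by a deep GRU, and the outputs are aggregated by averaging.
import torch
import torch.nn as nn

class GRUSceneClassifier(nn.Module):
    def __init__(self, feat_dim, hidden_dim, num_classes, num_layers=2):
        super().__init__()
        # Multi-layer ("deep") GRU over a subsequence of LTE vectors.
        self.gru = nn.GRU(feat_dim, hidden_dim, num_layers=num_layers,
                          batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, subseq_len, feat_dim)
        _, h_n = self.gru(x)      # h_n: (num_layers, batch, hidden_dim)
        return self.fc(h_n[-1])   # logits from the last layer's final state

def classify_scene(model, scene, subseq_len):
    """Split one scene (seq_len, feat_dim) into non-overlapping
    subsequences, classify each, and average class probabilities."""
    subseqs = scene.unfold(0, subseq_len, subseq_len)  # (n, feat_dim, subseq_len)
    subseqs = subseqs.transpose(1, 2)                  # (n, subseq_len, feat_dim)
    with torch.no_grad():
        probs = torch.softmax(model(subseqs), dim=-1)
    return probs.mean(dim=0).argmax().item()

# Hypothetical usage with made-up dimensions.
model = GRUSceneClassifier(feat_dim=128, hidden_dim=256, num_classes=19)
scene = torch.randn(400, 128)  # one scene: 400 LTE vectors of dimension 128
print(classify_scene(model, scene, subseq_len=50))
```

Averaging class probabilities is one plausible aggregation rule; the paper's exact aggregation scheme may differ.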

Item Type: Conference or workshop item (Proceeding)
DOI/Identification number: 10.21437/Interspeech.2017-101
Uncontrolled keywords: audio scene classification, deep neural networks, recurrent neural networks, GRU
Divisions: Division of Computing, Engineering and Mathematical Sciences > School of Computing
Depositing User: Huy Phan
Date Deposited: 25 Feb 2019 15:36 UTC
Last Modified: 09 Dec 2022 01:58 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/72671
