Hertel, Lars, Phan, Huy, Mertins, Alfred (2016) Comparing Time and Frequency Domain for Audio Event Recognition Using Deep Learning. In: IEEE International Joint Conference on Neural Networks (IJCNN 2016), pp. 3407-3411. IEEE, Vancouver, BC, Canada. ISBN 978-1-5090-0619-9. (doi:10.1109/IJCNN.2016.7727635) (KAR id:72680)
PDF (pre-print, 222 kB) | Language: English
Official URL: https://doi.org/10.1109/IJCNN.2016.7727635
Abstract
Recognizing acoustic events is an intricate problem for a machine and an emerging field of research. Deep neural networks achieve convincing results and are currently the state-of-the-art approach for many tasks. One advantage is their implicit feature learning, as opposed to explicit feature extraction from the input signal. In this work, we analyzed whether more discriminative features can be learned from either the time-domain or the frequency-domain representation of the audio signal. For this purpose, we trained multiple deep networks with different architectures on the Freiburg-106 and ESC-10 datasets. Our results show that feature learning from the frequency domain is superior to the time domain. Moreover, adding convolution and pooling layers to explore local structures of the audio signal significantly improves the recognition performance and achieves state-of-the-art results.
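The abstract contrasts two input representations for the same recognition task. As an illustration only (this record page carries no code), the following is a minimal PyTorch sketch of that comparison: one network applying 1-D convolution and pooling to the raw waveform, and one applying 2-D convolution and pooling to a log-magnitude spectrogram. All layer sizes, the STFT parameters, and the ten-class output (loosely matching ESC-10) are hypothetical choices, not taken from the paper.

```python
# Minimal sketch (not the authors' code) of time-domain vs. frequency-domain
# feature learning with convolution and pooling layers. Assumes PyTorch;
# every architectural detail here is a hypothetical placeholder.
import torch
import torch.nn as nn

class TimeDomainNet(nn.Module):
    """Learns filters directly on the raw waveform (time domain)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8),   # 1-D conv over samples
            nn.ReLU(),
            nn.MaxPool1d(4),                              # pooling over time
            nn.Conv1d(16, 32, kernel_size=32, stride=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, wave):                  # wave: (batch, samples)
        h = self.features(wave.unsqueeze(1))  # add channel dim
        return self.classifier(h.squeeze(-1))

class FreqDomainNet(nn.Module):
    """Learns features on a log-magnitude spectrogram (frequency domain)."""
    def __init__(self, num_classes=10, n_fft=512, hop=256):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 2-D conv over time-frequency
            nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling over both axes
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, wave):                  # wave: (batch, samples)
        spec = torch.stft(wave, self.n_fft, hop_length=self.hop,
                          window=torch.hann_window(self.n_fft, device=wave.device),
                          return_complex=True).abs()
        logspec = torch.log1p(spec)           # log-magnitude spectrogram
        h = self.features(logspec.unsqueeze(1))
        return self.classifier(h.flatten(1))

# Usage: both nets consume the same one-second, 16 kHz clips.
wave = torch.randn(8, 16000)
print(TimeDomainNet()(wave).shape, FreqDomainNet()(wave).shape)  # (8, 10) each
```

Feeding both networks identical clips, as above, isolates the input representation as the variable under comparison, which is the core of the experiment the abstract describes.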
| Item Type | Conference or workshop item (Proceeding) |
|---|---|
| DOI/Identification number | 10.1109/IJCNN.2016.7727635 |
| Divisions | Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing |
| Depositing User | Huy Phan |
| Date Deposited | 25 Feb 2019 16:00 UTC |
| Last Modified | 05 Nov 2024 12:35 UTC |
| Resource URI | https://kar.kent.ac.uk/id/eprint/72680 |