Kent Academic Repository

Deep Bottleneck Feature for Image Classification

Song, Yan, McLoughlin, Ian Vince, Dai, Li-Rong (2015) Deep Bottleneck Feature for Image Classification. In: ACM International Conference on Multimedia Information Retrieval (ICMR), June 2015, Shanghai. (doi:10.1145/2671188.2749314) (KAR id:50264)

The full text of this publication is not currently available from this repository. You may be able to access a copy if URLs are provided.
Official URL:
https://doi.org/10.1145/2671188.2749314

Abstract

A key problem in spoken language identification (LID) is to design effective representations that are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short-duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to variations caused by different speakers, the specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF-based i-vector representation for each speech utterance. Results on the NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system is proposed.
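The core idea behind a deep bottleneck feature is simple: train a DNN with one deliberately narrow hidden layer, then discard the layers above it and use that layer's activations as a compact frame-level representation. The sketch below illustrates only the extraction step, as a plain NumPy forward pass; the layer sizes, random weights, and sigmoid activations are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: a 39-dim acoustic input (e.g. MFCCs with
# deltas), two wide hidden layers, a narrow 43-dim bottleneck, then the
# layers that would be discarded after training. Sizes are assumptions.
sizes = [39, 1024, 1024, 43, 1024, 120]
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(sizes[:-1], sizes[1:])]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_to_bottleneck(frames, weights, bottleneck_index=3):
    """Propagate acoustic frames through the network only as far as the
    bottleneck layer and return its activations as the DBF."""
    h = frames
    for w in weights[:bottleneck_index]:
        h = sigmoid(h @ w)
    return h

# One utterance: 200 frames of 39-dim acoustic features (random stand-in).
utterance = rng.standard_normal((200, 39))
dbf = forward_to_bottleneck(utterance, weights)
print(dbf.shape)  # (200, 43): compact frame-level bottleneck features
```

In the paper's pipeline these frame-level DBFs would then feed a total-variability (i-vector) model to produce a single fixed-length vector per utterance; that stage is not shown here.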

Item Type: Conference or workshop item (Speech)
DOI/Identification number: 10.1145/2671188.2749314
Subjects: T Technology
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing
Depositing User: Ian McLoughlin
Date Deposited: 21 Aug 2015 10:12 UTC
Last Modified: 05 Nov 2024 10:35 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/50264 (The current URI for this page, for reference purposes)

University of Kent Author Information

McLoughlin, Ian Vince.

Creator's ORCID: https://orcid.org/0000-0001-7111-2008
