Langroudi, George, Palaniappan, Ramaswamy, McLoughlin, Ian Vince (2021) Auditory evoked potential detection during pure-tone audiometry. In: 10th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE (doi:10.1109/NER49283.2021.9441417) (KAR id:91394)
PDF (Author's Accepted Manuscript, English, 235kB)
Official URL: https://doi.org/10.1109/NER49283.2021.9441417
Abstract
Modern audiometry is largely a behavioural task, with the pure-tone audiogram (PTA) being the gold standard for evaluating frequency-specific hearing thresholds in adults. The behavioural nature of audiometry makes it difficult to estimate accurate hearing thresholds in infants and people with disabilities, for whom following instructions or interacting with the test may be difficult or impossible. We propose a method in which Auditory Evoked Potentials (AEPs) are used as an alternative to behavioural audiometry for detecting frequency-specific thresholds. Specifically, P300 responses elicited by the tones of a PTA are automatically detected from electroencephalogram (EEG) data to evaluate hearing acuity. To assess the effectiveness of this method, we created a dataset of EEG recordings from participants presented with a series of pure tones at six different frequencies with steadily decreasing volumes during a PTA test. This dataset was used to train a support vector machine (SVM) to identify, from a participant's EEG, when a tone was played and whether it was perceived. Results demonstrate that detecting hearing events can be very accurate for participants on whose data the classifier has been trained a priori. Accuracy drops significantly for unseen participants, i.e. when the classifier has not been trained on any prior data from a given participant before classifying their EEG. Having established that AEP response-based audiometry is viable for detecting tones, future work will explore the ability of more powerful deep neural networks to generalise accurately to unseen participants.
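The paper's exact preprocessing, feature extraction and SVM configuration are not given on this page, but the within-subject versus unseen-subject comparison described in the abstract can be sketched as below. This is a minimal, hypothetical example using scikit-learn with synthetic data standing in for EEG epoch features; the array shapes, kernel choice and labels are assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' code): within-subject vs. unseen-subject
# evaluation of an SVM that classifies EEG epochs as "tone perceived" vs "not".
# Feature dimensions, labels and SVM settings below are illustrative only.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

# Hypothetical data: one feature vector per EEG epoch (e.g. flattened
# post-stimulus samples), a binary label, and the subject each epoch came from.
rng = np.random.default_rng(0)
n_epochs, n_features, n_subjects = 600, 64, 6
X = rng.standard_normal((n_epochs, n_features))
y = rng.integers(0, 2, n_epochs)                 # 1 = tone perceived, 0 = not
groups = rng.integers(0, n_subjects, n_epochs)   # subject ID per epoch

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# Within-subject estimate: every subject contributes epochs to training folds.
within = cross_val_score(clf, X, y, cv=5)

# Unseen-subject estimate: leave-one-subject-out, so the test subject's EEG
# is never seen during training (the harder setting reported in the abstract).
unseen = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())

print(f"within-subject accuracy: {within.mean():.2f}")
print(f"unseen-subject accuracy: {unseen.mean():.2f}")
```

Leave-one-group-out cross-validation is the standard way to measure how well such a classifier transfers to participants it has never seen, which is the setting where the abstract reports the drop in accuracy.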
| Field | Value |
|---|---|
| Item Type | Conference or workshop item (Proceeding) |
| DOI/Identification number | 10.1109/NER49283.2021.9441417 |
| Divisions | Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing |
| Depositing User | Palaniappan Ramaswamy |
| Date Deposited | 07 Nov 2021 12:02 UTC |
| Last Modified | 27 Aug 2022 22:09 UTC |
| Resource URI | https://kar.kent.ac.uk/id/eprint/91394 (the current URI for this page, for reference purposes) |