Mzurikwao, Deogratias, Samuel, Oluwarotimi Williams, Asogbon, Mojisola Grace, Li, Xiangxin, Li, Guanglin, Yeo, Woon-Hong, Efstratiou, Christos and Ang, Chee Siang (2019) A Channel Selection Approach Based on Convolutional Neural Network for Multi-channel EEG Motor Imagery Decoding. In: 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), pp. 195-202. IEEE, New York, USA. ISBN 978-1-7281-1489-7. E-ISBN 978-1-7281-1488-0. (doi:10.1109/AIKE.2019.00042) (KAR id:76319)
PDF (Author's Accepted Manuscript, Language: English)
Official URL: https://doi.org/10.1109/AIKE.2019.00042
Abstract
For many disabled people, a brain-computer interface (BCI) may be the only way to communicate with others and to control things around them. Using the motor imagery paradigm, an individual's intention can be decoded from their brainwaves, allowing them to interact with their environment without making any physical movement. For decades, machine learning models trained on features extracted from acquired electroencephalogram (EEG) signals have been used to decode motor imagery activities. This approach has several limitations and constraints, especially during feature extraction. The large number of channels on current EEG devices also makes them hard to use in real life, as they are bulky, uncomfortable to wear, and take a long time to prepare. In this paper, we introduce a technique for performing channel selection using a convolutional neural network (CNN) and for decoding multiple classes of motor imagery intentions from four participants who are amputees. A CNN model trained on EEG data from 64 channels achieved a mean classification accuracy of 99.7% on five classes. Channel selection based on weights extracted from the trained model was then performed; subsequent models trained on eight selected channels achieved a reasonable accuracy of 91.5%. Training the model in the time domain and in the frequency domain was also compared, and different window sizes were tested to explore the possibility of real-time application. Our channel selection method was then evaluated on a publicly available motor imagery EEG dataset.
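The abstract describes ranking EEG channels by the weights a trained CNN assigns to them and keeping the top eight. The paper itself does not publish code here, so the following is only a minimal sketch of that general idea: it scores each channel by the total absolute weight it receives in a (here, randomly initialised stand-in) first convolutional layer and selects the highest-scoring eight. The array shapes and the scoring rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

# Stand-in for a trained first-layer CNN weight tensor with shape
# (n_filters, n_channels, kernel_len). In the paper's setting there are
# 64 EEG channels; the random values below are purely illustrative.
rng = np.random.default_rng(0)
weights = rng.normal(size=(16, 64, 5))

# Score each channel by the total absolute weight assigned to it,
# summed over all filters and kernel positions.
channel_scores = np.abs(weights).sum(axis=(0, 2))

# Keep the eight highest-scoring channels (cf. the 8-channel models
# reported in the abstract).
top8 = np.argsort(channel_scores)[::-1][:8]
print(sorted(top8.tolist()))
```

In practice the weight tensor would come from the trained model (e.g. the first convolutional layer's kernel), and the reduced-channel dataset would then be used to retrain a smaller model, as the abstract describes.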
| Item Type: | Conference or workshop item (Proceeding) |
|---|---|
| DOI/Identification number: | 10.1109/AIKE.2019.00042 |
| Uncontrolled keywords: | BCI, CNN, EEG, Feature maps, Motor imagery, Topographic maps |
| Subjects: | T Technology |
| Divisions: | Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Engineering and Digital Arts |
| Depositing User: | Deogratias Mzurikwao |
| Date Deposited: | 10 Sep 2019 13:10 UTC |
| Last Modified: | 05 Nov 2024 12:40 UTC |
| Resource URI: | https://kar.kent.ac.uk/id/eprint/76319 |