Volume 17, Issue 2 (9-2020) | JSDP 2020, 17(2): 113-120


Asghari Bejestani M R, Mohammadkhani G R, Gorgin S, Nafisi V R, Farahani G R. Classification of EEG Signals for Discrimination of Two Imagined Words. JSDP 2020; 17(2): 113-120
URL: http://jsdp.rcisp.ac.ir/article-1-843-en.html
Iranian Research Organization for Science and Technology (IROST)
Abstract:

In this study, a Brain-Computer Interface (BCI) for a silent-talk application was implemented. The goal was an electroencephalography (EEG) classifier for three classes: two imagined words ("Man" and "Red") and silence. During the experiment, subjects were asked to silently repeat one of the two words, or do nothing, in a pre-selected random order. EEG signals were recorded with a 14-channel EMOTIV wireless headset. Two combinations of features and classifiers were used: Discrete Wavelet Transform (DWT) features with a Support Vector Machine (SVM) classifier, and Principal Component Analysis (PCA) features with a minimum-distance classifier. Both combinations discriminated between the three classes well above the chance level (33.3%), although neither was reliable and accurate enough for a real application. The first method (DWT + SVM) gave better results. In this case, the feature set consisted of the D2, D3, D4 and A4 coefficients of a 4-level DWT decomposition of the EEG signals, roughly corresponding to the major frequency bands (delta, theta, alpha and beta) of these signals. Three binary SVMs were used, each trained to discriminate between two of the three classes, namely Man/Red, Man/Silence or Red/Silence, and a majority-vote rule determined the final class. When at least two of these classifiers returned the true class, a win (correct classification) was counted; otherwise a loss (false classification) was recorded. Finally, Monte-Carlo cross-validation showed an overall performance of about 56.8% correct classification, which is comparable with the results reported for similar experiments.
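The DWT + pairwise-SVM pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' code: the `db4` mother wavelet, the RBF kernel, the 50 random splits with a 20% test fraction, and the `load_trials()` helper (assumed to return labelled 14-channel EEG trials) are assumptions not stated in the abstract; only the A4/D4/D3/D2 feature set, the three pairwise classifiers (Man/Red, Man/Silence, Red/Silence), the two-vote majority rule and the Monte-Carlo cross-validation come from the text.

```python
# Minimal sketch of the abstract's DWT + pairwise-SVM pipeline (illustrative only).
# Assumptions: db4 wavelet, RBF kernel, 50 Monte-Carlo splits with 20% test data,
# and a hypothetical load_trials() returning (list of (14, n_samples) arrays, labels).
import numpy as np
import pywt
from itertools import combinations
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import accuracy_score

CLASSES = {0: "Man", 1: "Red", 2: "Silence"}

def dwt_features(trial, wavelet="db4", level=4):
    """4-level DWT per EEG channel; keep the A4, D4, D3 and D2 coefficients
    (roughly the delta/theta/alpha/beta bands) as the feature vector."""
    feats = []
    for channel in trial:                          # trial shape: (n_channels, n_samples)
        a4, d4, d3, d2, _d1 = pywt.wavedec(channel, wavelet, level=level)
        feats.extend([a4, d4, d3, d2])
    return np.concatenate(feats)

def majority_vote_predict(pairwise_svms, X):
    """Each binary SVM votes for one of its two classes; a class needs at
    least two votes to win, otherwise the trial is counted as a loss."""
    votes = np.zeros((len(X), len(CLASSES)), dtype=int)
    for clf in pairwise_svms.values():
        for i, p in enumerate(clf.predict(X)):     # p is one of the pair's labels
            votes[i, p] += 1
    winners = votes.argmax(axis=1)
    winners[votes.max(axis=1) < 2] = -1            # no two-vote majority -> invalid label
    return winners

# --- Monte-Carlo cross-validation over random train/test splits ---
X_raw, y = load_trials()                           # hypothetical loader (not in the paper)
X = np.stack([dwt_features(t) for t in X_raw])

scores = []
for train, test in ShuffleSplit(n_splits=50, test_size=0.2, random_state=0).split(X):
    svms = {}
    for a, b in combinations(CLASSES, 2):          # Man/Red, Man/Silence, Red/Silence
        mask = np.isin(y[train], [a, b])
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(X[train][mask], y[train][mask])
        svms[(a, b)] = clf
    scores.append(accuracy_score(y[test], majority_vote_predict(svms, X[test])))

print(f"Monte-Carlo CV accuracy: {np.mean(scores):.3f}")
```

Trials without a two-vote majority are mapped to an invalid label so that they count as misclassifications, matching the win/loss rule described in the abstract.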

Full-Text [PDF 2403 kb]
Type of Study: Research | Subject: Paper
Received: 2018/05/06 | Accepted: 2020/05/13 | Published: 2020/09/14 | ePublished: 2020/09/14


Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
