Motor imagery classification is a pivotal task in brain-computer interfaces (BCIs), enabling direct communication between the human brain and external devices. Traditional methodologies often rely on manual feature engineering and basic classifiers, which limits their ability to capture intricate patterns in brain signals. To address this challenge, we propose a transformer-based framework tailored to motor imagery classification in BCIs. Our model capitalizes on the self-attention mechanism of transformers to autonomously learn hierarchical representations from EEG signals, thereby capturing both spatial and temporal dependencies. Through rigorous experimentation on publicly available EEG datasets, such as the BCI Competition IV 2a motor imagery dataset, we demonstrate the efficacy of the NeuroTransformer architecture, which achieves an accuracy of 86.2%, a sensitivity of 83.5%, and a precision of 85.4%. Additionally, incorporating Principal Component Analysis (PCA) with NeuroTransformer yields an accuracy of 86.7%, a sensitivity of 82.8%, and a precision of 86.1%. In future work, we will focus on handling inter- and intra-subject variability.
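The scaled dot-product self-attention that the abstract credits with capturing temporal dependencies in EEG can be sketched as below. This is a minimal illustrative example, not the paper's actual NeuroTransformer: the sequence length, feature dimension, and random weights are all assumptions chosen only to show how each time step attends to every other.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of EEG time steps.

    X: (T, d) array of T time steps with d features (e.g. EEG channels).
    Returns the attended sequence and the (T, T) attention weights.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (T, T) pairwise similarities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights

# Hypothetical trial: 100 time samples of 22-channel EEG (random stand-in data).
rng = np.random.default_rng(0)
T, d = 100, 22
X = rng.standard_normal((T, d))
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

In a full transformer encoder this operation would be wrapped with multiple heads, residual connections, and feed-forward layers, and the attended features would feed a classification head over the motor imagery classes.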
In the realm of communication, individualized treatment for persons with disabilities remains paramount. Roughly 5% of the population experiences communication impairments rooted in health conditions affecting speech, language comprehension, auditory processing, reading, writing, or social interaction. This spectrum ranges from lifelong conditions such as cerebral palsy to acquired ones such as aphasia, amyotrophic lateral sclerosis, and traumatic brain injury. Although current technology adeptly translates neural activity into speech for those who have lost their innate vocal capabilities to neurological illness or injury, it does not address congenital speech disabilities. Persons with communication disabilities often report being subjected to generalization; hence the imperative of supporting individuals with speech impairments. Engineers now have a distinctive opportunity to introduce innovative, cost-effective technological solutions that help those with speech disabilities communicate effectively with others. Electroencephalogram (EEG) signals, recorded from the scalp, play a pivotal role in such systems; they are commonly categorized by their frequency, amplitude, and waveform characteristics. This paper centers on a significant endeavor: enhancing the quality of life for individuals with speech impairments. The primary focus is deciphering select cognitive expressions of speech-impaired individuals and translating them into speech. Accomplishing this objective requires fusing EEG data with advanced machine learning algorithms to accurately classify intended thoughts within specified time frames.
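The frequency-based categorization of EEG signals mentioned above can be sketched as a band-power computation. The sketch below is illustrative and uses a synthetic signal, an assumed 250 Hz sampling rate, and the canonical delta/theta/alpha/beta band boundaries; it is not the paper's actual feature pipeline.

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """Mean spectral power of a single-channel signal in each EEG band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic 2-second trial: a strong 10 Hz (alpha-band) oscillation plus noise.
t = np.arange(0, 2, 1.0 / FS)
rng = np.random.default_rng(1)
trial = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

powers = band_powers(trial)
dominant = max(powers, key=powers.get)  # "alpha" for this synthetic trial
```

Band powers like these, computed per channel and per time window, are a common feature representation fed to machine learning classifiers in EEG-based communication systems.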