I am a Doctoral Researcher working on brain-computer interfaces (BCIs) for converting thought to speech. My research focuses on the recognition of mentally spoken (imagined) speech captured using electroencephalography (EEG). I use deep learning frameworks for the recognition and analysis of speech from EEG signals.
To build deep learning models, I use Python with the TensorFlow, Keras, and PyTorch libraries. For statistical analysis of EEG signals, I use MNE-Python.
The focus of my work is achieving state-of-the-art recognition rates for interpreting imagined speech from EEG signals. This technology can provide a means of communication to people suffering from neurological disorders such as locked-in syndrome.