ABSTRACT
There is increasing interest in developing intuitive brain-computer interfaces (BCIs) that can differentiate mental tasks such as imagined speech. Both electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) have been used for this purpose. However, the classification accuracy and number of commands in such BCIs have been limited. Multi-modal BCIs have been proposed to address these issues for some common BCI tasks, but not for imagined speech. Here, we propose a multi-class hybrid fNIRS-EEG BCI based on imagined speech. Eleven participants performed multiple iterations of three tasks: mentally repeating ‘yes’ or ‘no’ for 15 s, or an equivalent duration of unconstrained rest. We achieved an average ternary classification accuracy of 70.45 ± 19.19%, which was significantly better than that attained with either modality alone (p < 0.05). Our findings suggest that concurrent measurement of EEG and fNIRS can improve the classification accuracy of BCIs based on imagined speech.
Disclosure statement
No potential conflict of interest was reported by the authors.