Abstract
A Brain–Computer Interface (BCI) provides a new communication channel for severely disabled people who have completely or partially lost control over muscular activity. However, it is questionable whether a BCI is the best choice for controlling a device when partial muscular activity is still available. For example, gaze-based interfaces can be used by people who retain control of their eye movements. Such interfaces lack a natural degree of freedom for issuing a selection command (e.g., a mouse click). A common workaround relies on so-called dwell times, which easily leads to errors if users do not pay close attention to where they are looking. We developed a multimodal interface that combines eye movements and a BCI into a hybrid BCI, resulting in a robust and intuitive device for touchless interaction. In particular, this system is capable of dealing with different levels of stimulus complexity.
Acknowledgments
Parts of this study were presented at the HCII 2009 conference in San Diego and have already been published in the proceedings of that conference. We gratefully thank Sandra Troesterer, Eva Wiese, Johann Habakuk Israel, and Matthias Roetting for their helpful comments and support.