Research Article

Enhancing Hybrid Eye Typing Interfaces with Word and Letter Prediction: A Comprehensive Evaluation

Received 21 Jun 2023, Accepted 13 Dec 2023, Published online: 28 Dec 2023
