Effects of auditory, haptic and visual feedback on performing gestures by gaze or by hand

Pages 1044-1062 | Received 02 Jan 2015, Accepted 20 May 2016, Published online: 16 Jun 2016
 

ABSTRACT

Modern interaction techniques such as non-intrusive gestures provide means for interacting with distant displays and smart objects without touching them. We were interested in the effects of feedback modality (auditory, haptic or visual), and its combined effect with input modality, on user performance and experience in such interactions. We therefore conducted two exploratory experiments in which numbers were entered, either by gaze or by hand, using gestures composed of four stroke elements (up, down, left and right). In Experiment 1, simple feedback was given on each stroke during the motor action of gesturing: an audible click, a haptic tap or a visual flash. In Experiment 2, semantic feedback was given on the final gesture: the executed number was spoken, coded as haptic taps or shown as text. With simultaneous simple feedback in Experiment 1, hand input was slower but more accurate than gaze input. With semantic feedback in Experiment 2, however, hand input was only slower. The effects of feedback modality were of minor importance; nevertheless, semantic haptic feedback in Experiment 2 proved useless, at least without extensive training. Error patterns differed between the two input modes but, again, did not depend on feedback modality. Taken together, the results show that when designing gestural systems, the choice of feedback modality can be given low priority; it can be made according to the task, context and user preferences.

Acknowledgements

We wish to thank Jussi Rantala for building the haptic feedback device for us and other members of the HAGI project for comments on the experimental set-up.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

The work reported was performed within the Transregional Collaborative Research Centre SFB/TRR 62 ‘Companion – Technology for Cognitive Technical Systems’, funded by the German Research Foundation (DFG), Transfer Project (T01). The work of Päivi Majaranta was funded by the Academy of Finland, project Haptic Gaze Interaction (#260179). The work of Poika Isokoski was funded by the Academy of Finland, project Mind, Picture, Image (#266285).
