Abstract
In a previous paper, the authors built a neural network model to recognize the Japanese Sign Language syllabary, or yubimoji. One problem encountered in that study was the accurate digital representation and distinction of similar yubimoji gestures, i.e., gestures with the same finger flexure positions but different hand/finger orientations. This study focuses on these gestures. Using data from a glove interface equipped with bend sensors and accelerometers, a neural network was built, trained, and tested. The network performed well, successfully distinguishing the similar gestures.
Acknowledgements
The authors wish to acknowledge the cooperation of Mr Toshikazu Watanabe, Mr Yutaka Iijima, and Ms Yukiko Watanabe of the Gunma Federation of the Hearing Impaired (Japan).
Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.