Assistive Technology
The Official Journal of RESNA
Volume 30, 2018 - Issue 3
Original Articles

Ambient intelligence framework for real-time speech-to-sign translation

Pages 119-132 | Accepted 30 Nov 2016, Published online: 02 Feb 2017

References

  • Acampora, G., Loia, V., Nappi, M., & Ricciardi, S. (2005). Ambient intelligence framework for context aware adaptive applications. In Computer Architecture for Machine Perception (CAMP) (pp. 327–332). IEEE.
  • Brown, P. F., Cocke, J., Della Pietra, S. A., Della Pietra, V. J., Jelinek, F., Lafferty, J. D., … Roossin, P. S. (1990). A statistical approach to machine translation. Computational Linguistics, 16, 79–85.
  • Bungeroth, J., & Ney, H. (2004). Statistical sign language translation. In Workshop on Representation and Processing of Sign Languages, LREC (Vol. 4). Lisbon, Portugal: ELRA.
  • Carreiras, M., Gutierrez-Sigut, E., Baquero, S., & Corina, D. (2008). Lexical processing in Spanish sign language (LSE). Journal of Memory and Language, 58(1), 100–122. doi:10.1016/j.jml.2007.05.004
  • Casacuberta, F., & Vidal, E. (2004). Machine translation with inferred stochastic finite-state transducers. Computational Linguistics, 30(2), 205–225. doi:10.1162/089120104323093294
  • Chang, C.-M. (2015). Innovation of a smartphone app design as used in face-to-face communication for the deaf/hard of hearing. Online Journal of Art and Design, 3(4), 74–87.
  • Chen, C. B., Wu, X. L., & Huang, W. (2009). Fast integral projection algorithm. Journal of Chinese Computer Systems, 4, 041.
  • Chuan, C. H., & Guardino, C. A. (2016, March). Designing SMARTSIGNPLAY: An interactive and intelligent American Sign Language app for children who are deaf or hard of hearing and their families. In Companion Publication of the 21st International Conference on Intelligent User Interfaces, Sonoma, California, USA (pp. 45–48). ACM.
  • CMU Sphinx [Computer software]. (2016). Retrieved from http://cmusphinx.sourceforge.net/
  • Echeverry-Correa, J. D., Martínez González, B., San Segundo Hernández, R., Córdoba Herralde, R. de, & Ferreiros López, J. (2014). Dynamic topic-based adaptation of language models: A comparison between different approaches. IberSPEECH, 8854, 139–148.
  • Digalakis, V., Monaco, P., & Murveit, H. (1996). Genones: Generalized mixture tying in continuous hidden Markov model based speech recognizers. IEEE Transactions on Speech and Audio Processing, 4(4), 281–289. doi:10.1109/89.506931
  • Dreuw, P., Stein, D., Deselaers, T., Rybach, D., Zahedi, M., Bungeroth, J., & Ney, H. (2008). Spoken language processing techniques for sign language recognition and translation. Technology and Disability, 20(2), 121–133.
  • Hatano, M., Sako, S., & Kitamura, T. (2015). Contour-based hand pose recognition for sign language recognition. Speech and Language Processing for Assistive Technologies, SLPAT, 17–21.
  • Heiberger, R. M., & Robbins, N. B. (2014). Design of diverging stacked bar charts for Likert scales and other applications. Journal of Statistical Software, 57(5), 1–32. doi:10.18637/jss.v057.i05
  • Hoang, H., & Koehn, P. (2008). Design of the Moses decoder for statistical machine translation. In Software Engineering, Testing, and Quality Assurance for Natural Language Processing (pp. 58–65). Stroudsburg, PA, USA: Association for Computational Linguistics.
  • Jain, D., Findlater, L., Gilkeson, J., Holland, B., Duraiswami, R., Zotkin, D., & Froehlich, J. E. (2015, April). Head-mounted display visualizations to support sound awareness for the deaf and hard of hearing. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea (pp. 241–250). ACM.
  • Kim, K.-W., Choi, J.-W., & Kim, Y.-H. (2013). An assistive device for direction estimation of a sound source. Assistive Technology, 25(4), 216–221. doi:10.1080/10400435.2013.768718
  • Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., … Herbst, E. (2007, June). Moses: Open source toolkit for statistical machine translation. The 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, Prague, Czech Republic (pp. 177–180).
  • Koehn, P., Och, F. J., & Marcu, D. (2003, May). Statistical phrase-based translation. Human Language Technology Conference, HLT-NAACL (pp. 127–133). Edmonton, Canada: Association for Computational Linguistics.
  • Kraut, R., Patterson, M., Lundmark, V., Kiesler, S., Mukophadhyay, T., & Scherlis, W. (1998). Internet paradox: A social technology that reduces social involvement and psychological well-being? American Psychologist, 53(9), 1017–1031. doi:10.1037/0003-066X.53.9.1017
  • Lagun, D., Hsieh, C. H., Webster, D., & Navalpakkam, V. (2014, July). Towards better measurement of attention and satisfaction in mobile search. International ACM SIGIR Conference on Research & Development in Information Retrieval (pp. 113–122).
  • Lamere, P., Kwok, P., Walker, W., Gouvêa, E., Singh, R., Raj, B., & Wolf, P. (2003). Design of the CMU Sphinx-4 decoder. In Proceedings of EUROSPEECH, Geneva, Switzerland.
  • Levenshtein, V. (1966). Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8), 707–710.
  • Lin, F. R., Niparko, J. K., & Ferrucci, L. (2011). Hearing loss prevalence in the United States. Archives of Internal Medicine, 171(20), 1851–1852. doi:10.1001/archinternmed.2011.506
  • López-Ludeña, V., González-Morcillo, C., López, J. C., Barra-Chicote, R., Córdoba, R., & San-Segundo, R. (2014). Translating bus information into sign language for deaf people. Engineering Applications of Artificial Intelligence, 32, 258–269. doi:10.1016/j.engappai.2014.02.006
  • López-Ludeña, V., González-Morcillo, C., López, J. C., Ferreiro, E., Ferreiros, J., & San-Segundo, R. (2014). Methodology for developing an advanced communications system for the deaf in a new domain. Knowledge-Based Systems, 56, 240–252. doi:10.1016/j.knosys.2013.11.017
  • Mielke, M., & Brück, R. (2015, October). A pilot study about the smartwatch as assistive device for deaf people. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, Lisbon, Portugal (pp. 301–302). ACM.
  • Och, F. J., & Ney, H. (2000, October). Improved statistical alignment models. The Annual Meeting of the Association for Computational Linguistics, Hong Kong (pp. 440–447). ACM.
  • Och, F. J., & Ney, H. (2003). A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1), 19–51. doi:10.1162/089120103321337421
  • Othman, A., & Jemni, M. (2011). Statistical sign language machine translation: From English written text to American Sign Language gloss. arXiv preprint arXiv:1112.0168.
  • PowerDiversity YouTube Channel. (2016). Retrieved from https://www.youtube.com/user/PowerDiversity
  • Quartz. (2016). We talked to the Oxford philosopher who gave Elon Musk the theory that we are all computer simulations. Retrieved from http://qz.com/699518/we-talked-to-the-oxford-philosopher-who-gave-elon-musk-the-theory-that-we-are-all-computer-simulations/
  • Stokoe, W. C. (2005). Sign language structure: An outline of the visual communication systems of the American deaf. Journal of Deaf Studies and Deaf Education, 10(1), 3–37. doi:10.1093/deafed/eni001
  • Stolcke, A. (2002). SRILM—An extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing (ICSLP), Denver, CO.
  • Sutton, V., & Frost, A. (2013). SignWriting hand symbols manual (1st ed.). CA, USA: SignWriting Press.
  • The Eye Tribe [Computer software]. (2016). Retrieved from https://theeyetribe.com/
  • Tharwat, A., Gaber, T., Hassanien, A. E., Shahin, M. K., & Refaat, B. (2015). SIFT-based Arabic sign language recognition system. Afro-European Conference for Industrial Advancement, 334, 359–370.
  • Tran, J. J., Flowers, B., Risken, E. A., Ladner, R. J., & Wobbrock, J. O. (2014, October). Analyzing the intelligibility of real-time mobile sign language video transmitted below recommended standards. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility, Rochester, NY, USA (pp. 177–184). ACM.
  • Uchida, T., Umeda, S., Azuma, M., Miyazaki, T., Kato, N., Hiruma, N., & Nagashima, Y. (2016, June). Provision of emergency information in sign language CG animation over integrated broadcast-broadband system. In 2016 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Nara, Japan (pp. 1–4). IEEE
  • Unity 3D Engine [Computer software]. (2016). Retrieved from https://unity3d.com
  • Vincent, C., Deaudelin, I., & Hotton, M. (2007). Pilot on evaluating social participation following the use of an assistive technology designed to facilitate face-to-face communication between deaf and hearing persons. Technology and Disability, 19(4), 153–167.
  • World Federation of the Deaf (WFD). (2016). Human rights. Retrieved from https://wfdeaf.org/human-rights
  • World Health Organization. (2016). Deafness and hearing loss. Retrieved from http://www.who.int/mediacentre/factsheets/fs300/en/
  • Zhao, L., Kipper, K., Schuler, W., Vogler, C., & Palmer, M. (2000). A machine translation system from English to American sign language. Envisioning Machine Translation in the Information Future, 1934, 54–67.
