Assistive Technology
The Official Journal of RESNA
Volume 32, 2020 - Issue 4
Review Article

Assistive technologies for severe and profound hearing loss: Beyond hearing aids and implants

Pages 182-193 | Accepted 03 Sep 2019, Published online: 17 Jan 2020

References

  • Agarwal, A., & Thakur, M. (2013). Sign language recognition using Microsoft Kinect. 2013 Sixth International Conference on Contemporary Computing (IC3) (pp. 181–185).
  • AlerterGroup. (2017, April 27). Retrieved from http://alertergroup.com/products/deaf-alerter/
  • Al-Jarrah, O., & Halawani, A. (2001). Recognition of gestures in Arabic sign language using neuro-fuzzy systems. Artificial Intelligence, 117–138. doi:10.1016/S0004-3702(01)00141-2
  • Allen, J., Asselin, P., & Foulds, R. (2003). American Sign Language finger spelling recognition system. Proceedings of the IEEE 29th Annual Bioengineering Conference, 2003 (pp. 285–286). Newark, NJ.
  • Araujo, A. F., Brasil, F. L., Candido, L. S., Junior, A. D. S. B. L., Dutra, P. F. S., & Batista, E. C. F. C. (2017). Auris system: providing vibrotactile feedback for hearing impaired population. BioMed Research International, 1–9.
  • Bahan, B. (2004). The visual people. Paper presented at the conference Deaf Studies Today. Utah Valley State College, Orem.
  • Bragg, D., Huynh, N., & Ladner, E. R. (2016). A personalizable mobile sound detector app design for deaf and hard-of-hearing users. 18th International ACM SIGACCESS Conference on Computers and Accessibility (pp. 3–13). Reno: ACM.
  • Brashear, H., Starner, T., Lukowicz, P., & Junker, H. (2003). Using multiple sensors for mobile sign language recognition. 7th IEEE International Symposium on Wearable Computers (pp. 45–52). Washington, DC: IEEE. doi:10.1109/ISWC.2003.1241392
  • British Deaf Association. (2018, April 27). Retrieved from https://bda.org.uk/help-resources/
  • Carlson, D., & Ehrlich, N. (2005). Assistive technology and information technology use and need by persons with disabilities in the United States 2001. US Department of Education, National Institute on Disability and Rehabilitation Research.
  • Cavender, A., Ladner, E. R., & Riskin, A. E. (2006). MobileASL: Intelligibility of sign language video as constrained by mobile phone technology. 8th International ACM SIGACCESS Conference on Computers and Accessibility (pp. 71–78). Portland, OR: ACM.
  • Cavender, C. A., Bigham, P. J., & Ladner, E. R. (2009). ClassInFocus: Enabling improved visual attention strategies for deaf and hard of hearing students. Proceedings of the 11th international ACM SIGACCESS conference on Computers and accessibility (pp. 67–74). Pittsburgh, PA: ACM.
  • Cox, S., Lincoln, M., Tryggvason, J., Nakisa, M., Wells, M., Tutt, M., & Abbott, S. (2002). Tessa, a system to aid communication with deaf people. Proceedings of the Fifth International ACM Conference on Assistive Technologies - Assets ‘02.
  • Daoud, M., Al-Ashi, M., Abawi, F., & Khalifeh, A. (2015). In-house alert sounds detection and direction of arrival estimation to assist people with hearing difficulties. IEEE/ACIS 14th International Conference on Computer and Information Science (ICIS), 297–302.
  • Debevc, M., Stjepanovič, Z., & Holzinger, A. (2014). Development and evaluation of an e-learning course for deaf and hard of hearing based on the advanced Adapted Pedagogical Index method. Interactive Learning Environments, 22(1), 35–50. doi:10.1080/10494820.2011.641673
  • Ditcharoen, N., & Cercone. (2010). SignMT: An alternative language learning tool. Computers & Education, 55(1), 118–130. doi:10.1016/j.compedu.2009.12.009
  • Dolnick, E. (1993, September). Hearing loss as Culture. The Atlantic Monthly, (pp. 37–53).
  • Domingo, M. C. (2012). An overview of the Internet of Things for people with disabilities. Journal of Network and Computer Applications, 35, 584–596. doi:10.1016/j.jnca.2011.10.015
  • Drigas, A. S., Vrettaros, J., & Kouremenos, D. (2004). Teleeducation and e-learning services for teaching English as a second language to Deaf people, whose first language is the Sign Language. WSEAS Transactions on Information Science and Applications, 1(3), 834–842.
  • Drigas, S. A., Kouremenos, D., Kouremenos, S., & Vrettaros, J. (2005). An e-Learning System for the Deaf people. ITHET 6th Annual International Conference. Juan Dolio, Dominican Republic: IEEE.
  • El-Gayyar, Ibrahim, & Wahed. (2016). Translation from Arabic speech to Arabic Sign Language based on cloud computing. Egyptian Informatics Journal, 17(3), 295–303. doi:10.1016/j.eij.2016.04.001
  • Elliot, E. A., & Jacobs, A. M. (2013). Facial Expressions, Emotions, and Sign Languages. Frontiers in Psychology, 4(115), 1–4. doi:10.3389/fpsyg.2013.00115
  • Elliott, R., Glauert, R. J., Kennaway, R. J., Marshall, I., & Safar, E. (2008). Linguistic modelling and language-processing technologies for Avatar-based sign language presentation. Universal Access in the Information Society, 6, 375–391. doi:10.1007/s10209-007-0102-z
  • Escudeiro, P., Escudeiro, N., Reis, R., Lopes, J., Norberto, M., Baltasar, A., & Bidarra, J. (2015). Virtual Sign – A Real Time Bidirectional Translator of Portuguese Sign Language. Procedia Computer Science, 67, 252–262. doi:10.1016/j.procs.2015.09.269
  • Fajardo, I., Vigo, M., & Salmerón, L. (2009). Technology for supporting web information search and learning in Sign Language. Interacting with Computers, 21(4), 243–256. doi:10.1016/j.intcom.2009.05.005
  • Fellinger, J., Holzinger, D., Gerich, J., & Goldberg, D. (2010). Quality of life measures in the deaf. In V. R. Preedy & R. R. Watson (Eds.), Handbook of Disease Burdens and Quality of Life Measures (pp. 3853–3870). New York, NY: Springer.
  • Fujii, M., Mandana, A., Takakai, T., Watanabe, Y., Kamata, K., Ito, A., & Kakuda, Y. (2007). A study on deaf people supporting systems using cellular phone with Bluetooth in disasters. IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, (pp. 1–6). Espoo, Finland.
  • Gorman, B. M. (2014). VisAural: A wearable sound-localisation device for people with impaired hearing. 16th International ACM SIGACCESS Conference on Computers & Accessibility (pp. 337–338). Rochester: ACM.
  • Hannukainen, P., & Otto, K. H. (2006). Identifying customer needs: Disabled persons as lead users. In: Proceedings of ASME 2006 International Design Engineering Technical Conferences (pp. 243–251). Pennsylvania, PA.
  • Harrington, T. (2018, April 27). Sign language of the world by name. Retrieved from Gallaudet University Library http://libguides.gallaudet.edu/content.php?pid=114804&sid=991940
  • Honda, T., & Okamoto, M. (2014). User Interface Design of Sound Tactile. In K. Miesenberger, D. Fels, D. Archambault, P. Peňáz, & W. Zagler (Eds.), Computers Helping People with Special Needs (pp. 382–385). Linz, Austria: Springer.
  • iSigner. (2017, April 27). Retrieved from: https://isigner.com/
  • Kennaway, J. R., Glauert, J. R., & Zwitserlood, I. (2007). Providing signed content on the Internet by synthesized animation. ACM Transactions on Computer-Human Interaction, 14, 3. doi:10.1145/1279700.1279705
  • Kessler, D. G., Walker, N., & Hodges, F. L. (1995). Evaluation of the CyberGlove(TM) as a Whole Hand Input Device. ACM Transactions on Computer-Human Interaction, 2, 4. doi:10.1145/212430.212431
  • Ketabdar, H., & Polzehl, T. (2009). Tactile and visual alerts for deaf people by mobile phones. 11th International ACM SIGACCESS Conference on Computers and Accessibility (pp. 253–254). Pittsburgh: ACM.
  • Kheir, R., & Way, T. (2007). Inclusion of deaf students in computer science classes using real-time speech transcription. ACM SIGCSE Bulletin, 39(3), 261–265. doi:10.1145/1269900
  • Kim, J. S., & Kim, C. H. (2014). A review of assistive listening device and digital wireless technology for hearing instruments. Korean Journal Audiology, 18(3), 105–111. doi:10.7874/kja.2014.18.3.105
  • Kim, K., Choi, J.-W., & Kim, Y.-H. (2014). Detection and direction estimation of a sudden loud sound for the hearing assistive eyeglasses. In Proceedings of Inter.Noise International Congress on Noise Control Engineering, (pp. 1–10). Melbourne, Australia.
  • Kouremenos, D., Fotinea, S., Efthimiou, E., & Ntalianis, K. (2010). A prototype Greek text to Greek Sign Language conversion system. Behaviour & Information Technology, 29(5), 467–481. doi:10.1080/01449290903420192
  • Kumari, P., Goel, P., & Reddy, N. R. S. (2015). PiCam: IoT based wireless alert system for deaf and hard of hearing. International Conference on Advanced Computing and Communications (ADCOM) (pp. 39–44). Haryana, India: IEEE. doi:10.1109/ADCOM.2015.14
  • Kyle, F. E., & Cain, K. (2015). A comparison of deaf and hearing children’s reading comprehension profiles. Topics in Language Disorders, 35, 144–157. doi:10.1097/TLD.0000000000000053
  • The Lancet. (2016). Hearing loss: An important global health concern. The Lancet, 387(10036), 2351. doi:10.1016/S0140-6736(16)30777-2
  • Lane, H. (2015). Ethnicity, ethics, and the deaf-world. The Journal of Deaf Studies and Deaf Education, 10(3), 291–310. doi:10.1093/deafed/eni030
  • Lang, S., Block, M., & Rojas, R. (2012). Sign Language Recognition Using Kinect. Artificial Intelligence and Soft Computing Lecture Notes in Computer Science, (pp. 394–402).
  • Lee, S., Kang, S., Han, D., & Ko, H. (2016). Dialogue enabling speech-to-text user assistive agent system for hearing-impaired person. Medical and Biological Engineering and Computing, 54(6), 915–926. doi:10.1007/s11517-015-1447-8
  • Li, F. K., Lothrop, K., Gill, E., & Lau, S. (2011). A web-based sign language translator using 3D video processing. 14th International Conference on Network-Based Information Systems (pp. 356–361). Tirana, Albania: IEEE Computer Society. doi:10.1109/NBiS.2011.60
  • Li, J., Yin, B., Wang, L., & Kong, D. (2014). Chinese Sign Language animation generation considering context. Multimedia Tools and Applications, 71(2), 469–483. doi:10.1007/s11042-013-1541-6
  • Liddell, S. K. (1983). Review: American Sign Language Syntax. Language, 59(1), 221–224. doi:10.2307/414075
  • Maiorana-Basas, M., & Pagliaro, C. M. (2014). Technology use among adults who are deaf and hard of hearing: A national survey. The Journal of Deaf Studies and Deaf Education, 19(3), 400–410. doi:10.1093/deafed/enu005
  • Marschark, M., & Harris, M. (1996). Success and failure in learning to read: The special case of deaf children. In C. Coronoldi & J. Oakhill (Eds.), Reading Comprehension Difficulties: Process and Intervention (pp. 279–300). Hillsdale, NJ: Lawrence Erlbaum.
  • Marshall, M. T., & Wanderley, M. M. (2006). Vibrotactile feedback in digital musical instruments. Proceedings of the 2006 Conference on New Interfaces for Musical Expression. IRCAM-Centre Pompidou (pp. 226–229).
  • Matsuda, A., Nakamura, H., & Sugaya, M. (2014). Luminous device for the deaf and hard of hearing people. HAI 2014 - Proceedings of the 2nd International Conference on Human-Agent Interaction, (pp. 201–204). Tsukuba, Japan.
  • Mielke, M., & Bruck, R. (2016). AUDIS Wear: A Smartwatch based Assistive Device for Ubiquitous Awareness of Environmental Sounds. 38th Annual International Conference of the Engineering in Medicine and Biology Society (EMBC) (pp. 5343–5347). Orlando: IEEE.
  • Mielke, M., Grünewald, A., & Brück, R. (2013). An Assistive Technology for Hearing-Impaired Persons: Analysis, Requirements and Architecture. 35th Annual International Conference of the IEEE EMBS (pp. 4702–4705). Osaka, Japan: IEEE. doi:10.1109/EMBC.2013.6610597
  • Mirzaei, R. M., Ghorshi, S., & Mortazavi, M. (2012). Using augmented reality and automatic speech recognition techniques to help Deaf and Hard of Hearing people. Virtual Reality International Conference. Rio de Janeiro, Brazil: ACM. doi:10.1145/2331714.2331720
  • Motlhabi, B. M., Glaser, M., & Tucker, D. W. (2013). SignSupport: A limited communication domain mobile aid for a Deaf patient at the pharmacy. In R. Volkwyn (Ed.), Proc. Southern African Telecommunication Networks and Applications Conference (pp. 173–178). Stellenbosch, South Africa.
  • Nanayakkara, C. S., Wyse, L., Ong, H. S., & Taylor, A. E. (2013). Enhancing Musical Experience for the Hearing-Impaired Using Visual and Haptic Displays. Human–Computer Interaction, 28(2), 115–160.
  • Ng’ethe, G. G., Blake, E. H., & Glaser, M. (2015). SignSupport: A mobile aid for Deaf people learning computer literacy skills. 7th International Conference on Computer Supported Education (pp. 501–511). Lisbon, Portugal: SCITEPRESS.
  • Padden, C., & Humphries, T. (1998). Deaf in America: Voices from a culture. Cambridge, MA: Harvard University Press.
  • Paredes, H., Fonseca, B., Cabo, M., Pereira, T., & Fernandes, F. (2014). SOSPhone: A mobile application for emergency calls. Universal Access in the Information Society, 13(3), 277–290. doi:10.1007/s10209-013-0318-z
  • Parton, S. B., Hancock, R., & Du BusdeValempré, D. A. (2010). Tangible manipulatives and digital content: The transparent link that benefits young deaf children. 9th International Conference on Interaction Design and Children (pp. 300–303). Barcelona, Spain: ACM. doi:10.1145/1810543.1810597
  • Paudyal, P., Banerjee, A., & Gupta, K. S. S. (2016). SCEPTRE: A pervasive, non-invasive, and programmable gesture recognition technology. 21st International Conference on Intelligent User Interfaces (pp. 282–293). Sonoma, CA: ACM. doi:10.1145/2856767.2856794
  • Petry, B., Illandara, T., & Nanayakkara, S. (2016). MuSS-Bits: Sensor-Display Blocks for Deaf People to explore musical sounds. 28th Australian Conference on Computer Human-Interaction. Launceston, Australia: ACM. doi:10.1145/3010915.3010939
  • Pouris, M., & Fels, I. D. (2012). Creating an Entertaining and Informative Music Visualization. In K. Miesenberger, A. Karshmer, P. Penaz, & W. Zagler (Eds.), Computers Helping People with Special Needs (pp. 451–458). Linz, Austria: Springer. doi:10.1007/978-3-642-31522-0_68
  • Ravid, U., & Cairns, P. (2008). The design development of a mobile alert for the deaf and the hard of hearing. (Doctoral dissertation, University College London). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.133.9853&rep=rep1&type=pdf
  • Rekha, K., & Latha, B. (2014). Mobile Translation System from Speech Language to Hand Motion Language. International Conference on Intelligent Computing Applications (pp. 411–415). Coimbatore, India: IEEE. doi:10.1109/ICICA.2014.90
  • Ren, H., Meng, Q. M., & Chen, X. (2006). Wireless Assistive Sensor Networks for the Deaf. IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 4804–4808). Beijing: IEEE.
  • RNID. (2012, February 21). Banks face massive payouts to Deaf customers. Action on Hearing Loss. Retrieved March 18, 2017, from https://web.archive.org/web/20130604102110/http://www.actiononhearingloss.org.uk/news-and-events/all-regions/press-releases/banks-face-massive-payouts-to-deaf-customers.aspx?jse=1
  • RNID. (2017, March 18). Statistics. Action on Hearing Loss. A national charity since 1911. Retrieved from https://www.actiononhearingloss.org.uk/your-hearing/about-hearingloss-and-hearing-loss/statistics.aspx
  • Romaniuk, J., Suszczańska, N., & Szmal, P. (2011). Semantic Analyzer in the Thetos-3 System. In Z. Vetulani (Ed.), Human Language Technology. Challenges for Computer Science and Linguistics. LTC 2009. Lecture Notes in Computer Science (Vol. 6562, pp. 234–244). Berlin, Heidelberg: Springer.
  • Sahin, M. I., Sagers, J. E., & Stankovic, K. M. (2017). Cochlear Implantation: Vast Unmet Need to Address Hearing loss Globally. Otology & Neurotology, 38(6), 786–787. doi:10.1097/MAO.0000000000001416
  • Sarji, D. K. (2008). HandTalk: Assistive Technology for the Deaf. Computer, 41(7), 84–86. doi:10.1109/mc.2008.226
  • Slyper, L., Ko, Y., Kim, K. M., & Sobek, I. (2016). LifeKey: Emergency Communication Tool for the Deaf. CHI’16 Extended Abstracts (pp. 62–67). San Jose, CA: ACM. doi:10.1145/2851581.2890629
  • Sorgini, F., Caliò, R., Carrozza, M. C., & Oddo, C. M. (2018). Haptic-assistive technologies for audition and vision sensory disabilities. Disability and Rehabilitation: Assistive Technology, 13(4), 394–421. doi:10.1080/17483107.2017.1385100
  • Suharjito, Anderson, R., Wiryana, F., Ariesta, M. C., & Kusuma, G. J. (2017). Sign Language Recognition Application Systems for Deaf-Mute People: A Review Based on Input-Process-Output. Procedia Computer Science, 116, 441–448. doi:10.1016/j.procs.2017.10.028
  • Suszczanska, N., Szmal, P., & Kulikow, S. (2007). Continuous text translation using text modeling in the thetos system. International Journal of Computer, Electrical, Automation, Control and Information Engineering, 1(8), 2632–2635.
  • Trigueiros, P., Ribeiro, F., & Reis, L. P. (2014). Vision-Based Portuguese Sign Language Recognition System. In A. Correia, F. Tan, & K. Stroetmann (Eds.), New Perspectives in Information Systems and Technologies (Vol. 1, pp. 605–617). Madeira, Portugal: Springer.
  • Wauters, L. N. (2005). Reading comprehension in deaf children: The impact of the mode of acquisition. (Unpublished doctoral dissertation), Proefschrift Radboud Universiteit Nijmegen.
  • WHO. (2017, March 24). Deafness and hearing loss. World Health Organization. Retrieved from http://www.who.int/mediacentre/factsheets/fs300/en/
  • Woll, B. (2001). The sign that dares to speak its name: Echo phonology in British Sign Language. In P. Boyes Braem & R. Sutton-Spence (Eds.), The Hands are the Head of the Mouth (pp. 87–98). Leiden, The Netherlands: Signum-Verlag.
  • Zhao, X., Naguib, M. A., & Lee, S. (2014). Kinect based calling gesture recognition for taking order service of elderly care robot. The 23rd IEEE International Symposium on Robot and Human Interactive Communication. Edinburgh, Scotland: IEEE. doi:10.1109/ROMAN.2014.6926306
  • Zhou, Y., Sim, C. K., Tan, P., & Wang, Y. (2012). MOGAT: Mobile games with auditory training for children with cochlear implants. 20th ACM international conference on Multimedia (pp. 429–438). Nara, Japan: ACM. doi:10.1145/2393347.2393409
