Using haptic stimulation to enhance auditory perception in hearing-impaired listeners

Pages 63-74 | Received 30 Jul 2020, Accepted 10 Dec 2020, Published online: 29 Dec 2020

ABSTRACT

Introduction

Hearing-assistive devices, such as hearing aids and cochlear implants, transform the lives of hearing-impaired people. However, users often struggle to locate and segregate sounds. This leads to impaired threat detection and an inability to understand speech in noisy environments. Recent evidence suggests that segregation and localization can be improved by providing missing sound-information through haptic stimulation.

Areas covered

This article reviews the evidence that haptic stimulation can effectively provide sound information. It then discusses the research and development required for this approach to be implemented in a clinically viable device. This includes discussion of what sound information should be provided and how that information can be extracted and delivered.

Expert opinion

Although this research area has only recently emerged, it builds on a significant body of work showing that sound information can be effectively transferred through haptic stimulation. Current evidence suggests that haptic stimulation is highly effective at providing missing sound-information to cochlear implant users. However, a great deal of work remains to implement this approach in an effective wearable device. If successful, such a device could offer an inexpensive, noninvasive means of improving educational, work, and social experiences for hearing-impaired individuals, including those without access to hearing-assistive devices.

1. Introduction

Over the past half-century, dramatic advances in hearing-assistive device technology have transformed the lives of people with hearing impairment. One prominent example is the cochlear implant (CI), which enables severely-to-profoundly hearing-impaired individuals to perceive sounds through electrical stimulation of the auditory nerve. The CI stands as one of modern medicine’s greatest achievements, allowing users to follow a conversation in a quiet environment with accuracy similar to that of normal-hearing listeners [e.g. Citation1]. However, significant limitations to CIs remain, with users often having considerable difficulty locating and segregating sounds [Citation2,Citation3]. Similar issues are experienced by hearing aid users, though to a lesser extent [Citation3,Citation4]. These limitations lead to impaired threat detection and an inability to understand speech in noisy environments, such as schools, restaurants, and busy workplaces.

Recently, a new approach has been proposed that uses ‘electro-haptic stimulation’ [Footnote 1] [Citation5], whereby sound information that is poorly transferred by the electrical CI signal is provided through haptic stimulation. Exciting new evidence indicates that electro-haptic stimulation (EHS) can substantially improve speech-in-noise performance [Citation5–8] and sound localization [Citation9,Citation10], as well as increasing sensitivity to more basic sound properties, such as pitch [Citation11]. If effective, this approach could be delivered noninvasively and inexpensively in a compact wrist-worn device.

In addition to improving performance for hearing-assistive device users, haptic devices might be used to aid those currently unable to access hearing-assistive technology. It is estimated that around 99% of potential CI candidates worldwide cannot access a CI [Citation12]. Furthermore, for those in low-to-middle income countries who can access a CI, surgical complication rates are around double those in high-income countries [Citation13–16]. However, the main prohibitive factor is cost. In India, for example, the average annual personal income is less than 2,000 USD, whereas the cost of a CI (not including hospital fees) is between 12,000 and 25,000 USD [Citation17]. The consequences of this untreated hearing loss are substantial. Young children with unmanaged hearing loss typically have large deficits in language and cognitive development and low educational attainment [Citation18–22]. Children with a disabling hearing loss in low-to-middle income countries are also unlikely to complete primary education [Citation23]; strikingly, in India, less than a third of hearing-impaired children are enrolled in school at any level and less than 2% receive higher secondary education or above [Citation17]. Hearing-impaired adults in low-to-middle income countries have a much lower employment rate, and those who are employed tend to be in lower-grade occupations [Citation20,Citation24]. This means that, while hearing loss is often a result of poverty, it is also often a cause of poverty [Citation15,Citation20,Citation25]. The development of low-cost haptic devices to aid those with hearing impairments who are unable to access hearing-assistive devices could therefore have a substantial positive impact on quality of life, particularly in low-to-middle income countries.

This review is divided into two parts. The first examines existing work on the use of haptic stimulation to aid hearing-impaired listeners, whilst the second discusses how the promising work already undertaken could be translated into a clinically viable haptic device. It considers what sound information is most beneficial to hearing-impaired listeners, how to provide that information, and what the necessary requirements are for a successful haptic device.

2. Background

2.1. Enhancement of speech-in-noise performance

Work using haptic stimulation to aid those with hearing impairment dates back to at least the 1920s, when a desktop haptic device that stimulated the fingers was trialed to support deaf children in the classroom [Citation26–29]. For deaf individuals who were simultaneously lip reading, this device was reported to increase the number of words recognized by around 20% when there was no background noise. Later, beginning in the late 1960s, researchers in the visual sciences used a similar approach for blind individuals, delivering visual information through haptic stimulation on the fingers or back [Citation30–33]. Using this approach, participants were able to perceive depth and perspective, judge the speed and direction of a rolling ball, recognize faces and common objects, and complete complex inspection-assembly tasks. Interestingly, after training, participants reported that they experienced these ‘images’ as being externalized in front of them, rather than being located at the haptic stimulation site. These influential studies from both the auditory and visual sciences helped trigger an expansion of research into the use of haptic stimulation to aid deaf individuals, which peaked in the 1980s and 1990s [Citation34,Citation35]. The ‘tactile aids’ that were developed showed significant promise. One study, for example, showed that, after extensive training with the Queen’s Tactile Vocoder device, participants could learn a vocabulary of 250 words [Citation36,Citation37]. However, in parallel to the development of tactile aids, CI technology underwent a revolution [Citation1]. By the mid-1990s, outcomes for CI users had substantially outstripped those achieved by tactile aids [Citation1,Citation34]; by the early 2000s, the success of the CI had caused the development and use of tactile aids to all but cease.

EHS uses haptic stimulation to augment CI listening, rather than as an alternative to the CI. To assess the potential for a new generation of haptic devices to aid hearing-impaired individuals, it is important to consider the limitations that prevented the widespread use of tactile aids in the 1980s and 1990s. One limitation was that these devices were not compact or discreet, and required a large battery pack that frequently needed to be recharged. For example, the body-worn processor unit alone measured 84 × 82 × 30 mm for the Siemens Minifonator and 93 × 57 × 23 mm for the Tactaid II [Citation38]. Another issue was that the electronics, microphones, and haptic stimulators were all connected by wires. This made many tactile aids difficult to self-fit and uncomfortable to wear, and raised safety concerns (for example, that wires might get caught on objects such as cups and saucepans). A further important limitation was that the hardware of the time could not support advanced signal processing. Many of these limitations are now considerably reduced by the substantial developments in motor, battery, microprocessor, and wireless-communication technology. The time therefore seems right for a new generation of compact, discreet haptic devices to support the hearing impaired.

While tactile aids of the 1980s and 1990s were ineffective in noisy environments, two recent studies have shown that haptic stimulation can be used to improve speech-in-noise performance in CI users [Citation7] and normal-hearing listeners [Citation39]. However, there are two significant limitations to these studies. Firstly, haptic stimulation was delivered to the fingertip, which would disrupt many everyday activities if deployed in a clinical device; secondly, the haptic signal was extracted from the clean speech signal (without background noise), which is not available in the real world. If the clean speech signal were available, it would simply be presented to the listener through the hearing aid or CI.

More recently, it has been shown that haptic signals extracted from speech in noise and delivered to the wrists can improve speech-in-noise performance for CI users [Citation5,Citation6,Citation8]. These studies showed benefits across participants who used a range of CI devices (from MED-EL, Advanced Bionics, and Cochlear Ltd). In one study, where speech and noise were both presented from directly in front of the listener, CI users recognized 8% more words in noise with EHS, with some participants recognizing over 20% more words [Citation5]. Another study explored whether EHS improved speech recognition when speech and noise were spatially separated. This study focused on the 95% of CI users who are implanted in only one ear [Citation40]. The speech was presented from directly in front of the listener and the noise was presented either to the implanted side or to the non-implanted side. For both noise positions, CI users’ speech-reception thresholds in noise improved by around 3 dB when EHS was provided [Citation8]. In these studies of EHS enhancement, the signal processing was computationally lightweight so that it could be applied in real time on a compact device. This demonstration of an effective and clinically viable approach marks an exciting advance in the translation of EHS from a research finding to an effective clinical tool.

The speech-in-noise performance benefit measured for EHS with spatially separated sounds is comparable to the improvement observed when patients use two implants rather than one [Citation41,Citation42; see Citation8 for discussion]. However, implantation of a second device is expensive, risks loss of residual hearing and vestibular dysfunction, and limits access to future technologies and therapies. A noninvasive, inexpensive haptic device may therefore be an attractive alternative to a second implant.

For the many who have not received a second implant, another approach used to improve speech-in-noise performance is the mounting of an additional microphone behind the non-implanted ear. The audio from this microphone is transmitted to the implant so that the signals from the implanted and non-implanted sides can be combined. This contralateral routing of signal (CROS) approach aims to reduce the negative effects of the acoustic head-shadow when a sound of interest is on the non-implanted side. One study assessed whether CROS microphones benefit speech-in-noise performance when speech is presented in front of the listener and noise is presented either to the side with the implant or to the opposite side [Citation43]. Unexpectedly, no benefit of the CROS microphone was found when the noise was on the implanted side, and the CROS microphone was found to impair performance when the noise was on the non-implanted side. Another study found that CROS microphones did not affect speech-in-noise performance when the speech and noise were both in front of the listener, and reduced performance when the speech was in front and the noise was on the implanted or non-implanted side [Citation44]. EHS, on the other hand, has been shown to produce clear benefits in each of these three speech and noise configurations. Other studies have found considerable benefits of CROS microphones under different conditions, such as when speech is located on the non-implanted side and noise comes from loudspeakers all around the listener [Citation45,Citation46]. To date, no studies have assessed EHS benefits under comparable conditions.

2.2. Enhancement of sound localization

In addition to studies showing that tactile aids can be used to provide speech information, a small number of studies showed that haptic stimulation on the fingertips could be used to locate sounds [Citation47–51]. However, despite this early promise, haptic sound-localization remains little studied. Building on this work, it was recently shown that EHS can dramatically improve sound localization in CI users [Citation9]. In this study, the haptic signal was derived from the audio received by behind-the-ear hearing-assistive devices and delivered to each wrist. This allowed participants to access intensity differences between the ears [Citation52], which are key cues for sound localization. In CI users who were implanted in one ear (unilateral CI users), even without training, EHS was found to reduce RMS error in sound localization from 47° to 29°, making their performance similar to that of CI users implanted in both ears [bilateral CI users; Citation3,Citation53]. After a small amount of training with EHS (lasting around 15 minutes), performance improved substantially, becoming similar to that of bilateral hearing-aid users [Citation3,Citation54]. Another recent study, which used a similar approach but with a more sophisticated signal-processing strategy, found still greater haptic sound-localization accuracy [Citation10]. Researchers have explored whether CROS microphones improve sound localization for unilateral CI users, but found no clear benefit [Citation55].

The same EHS approach used for haptic sound-localization has been shown to enhance speech-in-noise performance for spatially separated sounds [Citation8]. The signal-processing approach is also similar to EHS approaches that have been shown to enhance speech-in-noise performance for co-located sounds [Citation5,Citation6]. Future work should aim to unify these promising signal-processing strategies.

2.3. Enhancement of music perception

CI users frequently suffer from an inability to appreciate and enjoy music [Citation56]. This is primarily due to the implant’s inability to provide frequency information, which conveys critical melody, harmony, and tonality information, and is important for sound segregation [Citation56–58]. Some studies have shown evidence that melody recognition can be improved using haptic stimulation at either the fingertip [Citation59] or wrist [Citation60]. Another study showed that a haptic device on the forearm could be used to substantially improve discrimination of changes in fundamental frequency (an acoustic correlate of pitch). Participants were able to discriminate fundamental-frequency shifts of just 1% [Citation11], which is less than the smallest pitch change found in most western melodies and substantially better than typical CI users [Citation61,Citation62]. This performance was maintained even in the presence of high levels of inharmonic background noise (with signal-to-noise ratios as low as −7.5 dB). However, an important challenge for this and other approaches will be to extract sound information for a single harmonic sound against a background of other harmonic sounds, such as in a polyphonic musical piece. This may be aided by the recent emergence of object-based audio encoding for music, film, and gaming, which gives access to individual sounds within a musical piece or auditory scene [e.g. Citation63,Citation64].

For the promising findings discussed to be successfully translated into a clinically viable haptic device, there are several important questions that must be addressed: (1) how will the audio signal be acquired; (2) how will this audio signal be processed and converted to haptic stimulation; (3) how and where will haptic stimulation be delivered; and (4) what are the key specifications for a successful haptic device? These questions will be considered in the following section.

3. Priorities for haptic provision and device design

3.1. Audio signal acquisition

The first challenge for a haptic device will be how to capture the audio that is transformed to haptic stimulation. In one proposed approach, the audio signal is streamed from behind-the-ear CIs or hearing aids that are either already worn by the user or are fitted in addition to an existing device [Citation5,Citation6,Citation8,Citation9]. One advantage of this approach is that technology already deployed in hearing-assistive devices, such as beamforming [Citation2,Citation65], can be exploited. In beamforming, the difference in the arrival time at multiple microphones mounted within a single device is used to steer the maximum sensitivity toward the sound source of interest (typically in front of the listener) and reduce sensitivity to sources from other locations (typically to the back and sides). This approach has been shown to substantially improve speech-in-noise performance [Citation66]. Another highly effective approach used with hearing-assistive devices is remote microphones, which are placed close to the sound source of interest [Citation66]. Remote microphones, such as the Roger Pen or Oticon ConnectClip, use Bluetooth or radio to stream audio directly to the hearing-assistive device. A haptic device that streamed audio from a hearing-assistive device could benefit from this existing technology.
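
To illustrate the principle, the sketch below implements a minimal two-microphone delay-and-sum beamformer in Python. It is not the algorithm used in any particular hearing-assistive device; the microphone spacing, sample rate, and integer-sample delay are simplifying assumptions (commercial devices use fractional-sample delays and adaptive weighting).

```python
# Minimal delay-and-sum beamformer sketch (illustrative only). The
# two-microphone end-fire geometry, 12 mm spacing, and sample rate are
# assumed values, not the design of any particular device.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s


def delay_and_sum(front_mic, rear_mic, mic_spacing_m=0.012, fs=16_000):
    """Steer maximum sensitivity toward sounds arriving from the front.

    Frontal sound reaches the rear microphone mic_spacing_m / c seconds
    late, so delaying the front signal by that amount aligns frontal
    sound, while sound from other directions sums incoherently and is
    attenuated. The delay is rounded to whole samples for simplicity.
    """
    delay = int(round(fs * mic_spacing_m / SPEED_OF_SOUND))
    delayed_front = np.concatenate([np.zeros(delay),
                                    front_mic[:len(front_mic) - delay]])
    return 0.5 * (delayed_front + rear_mic)
```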

Streaming audio from hearing-assistive devices has further advantages. Firstly, the haptic device could benefit from some of the signal processing already performed by the hearing-assistive device (such as pre-emphasis and microphone frequency-response correction filtering). Secondly, if audio is streamed from hearing-assistive devices behind each ear, haptic devices will have access to spatial-hearing cues (such as intensity differences between the ears), which have been exploited in previous work to improve sound localization [Citation9,Citation10,Citation52]. Finally, streaming audio from the same source as the hearing-assistive device will maximize the correlation between the audio and haptic signals, which is critical for effective multisensory integration [Citation67–71].

For audio streaming from a hearing-assistive device to be viable for real-world use, low-power wireless streaming technology is required. One new technology, which is available in many of the latest hearing-assistive devices, is Bluetooth Low Energy (LE). Bluetooth LE has greatly reduced power consumption compared to classic Bluetooth, allows higher-quality audio streaming, and supports multiple simultaneous data streams. An alternative to Bluetooth LE, which is used by Advanced Bionics and Phonak for streaming between hearing-assistive devices, is low-frequency radio. Low-frequency radio allows extremely low-latency data transfer but has high power consumption. Further work is required to establish the most effective technology for streaming between hearing-assistive and haptic devices.

An alternative to streaming audio from behind-the-ear devices is to mount microphones either on the haptic device or on another part of the body. A microphone mounted on a wrist- or hand-worn device might allow the user to direct the microphone toward a talker or other sound source of interest. However, arm movements, such as when walking or gesticulating, may lead to unwanted distortion of the audio signal. A newly released wrist-worn haptic device, the ‘Buzz’ (Neosensory, San Francisco, USA), has microphones mounted on top of the device. In informal real-world trials by the author and colleagues, this device was found to be frequently triggered by clothing moving against the device and to be highly susceptible to wind noise. It was also found to be excessively triggered by impulsive sounds, particularly when the hands manipulated objects during activities such as typing or cooking. These issues would be reduced or avoided by streaming audio from behind-the-ear devices, which use advanced techniques to suppress wind noise and impulsive sounds [Citation72]. A combination of microphones mounted on the device or body and on hearing-assistive devices might also be considered, particularly as having access to audio from microphones at multiple sites might aid noise reduction [e.g. Citation73,Citation74].

3.2. Signal processing

Once the audio has been received by the haptic device, the next consideration is how it should be processed. The first possible approach is not to process it at all, and to rely on the skin to extract the most important sound features [Citation27,Citation29,Citation39,Citation75]. One major limitation of this approach is that the skin is insensitive to vibration at frequencies higher than around 500 Hz [Citation76], where a large amount of speech energy resides [Citation77]. To overcome this issue, one tactile aid transposed sound at higher frequencies down to lower frequencies [Citation78]. Nonetheless, using this approach, important stimulus features are likely to be masked or to be impossible for the tactile system to extract [Citation79,Citation80].

Another approach is to extract key sound features from the audio signal and map them to the haptic signal. It is likely to be important to provide sound features that give frequency information, such as the fundamental frequency of the sound of interest. Hearing impairment almost always leads to a reduced ability to discriminate sounds at different frequencies [Citation81]. For CI users, frequency discrimination is typically particularly poor [Citation82]. This can impair talker age, sex, and accent identification [Citation83,Citation84] as well as perception of speech prosody, which allows listeners to distinguish emotions (e.g. anger from sadness), intention (e.g. sarcastic from sincere), statements from questions, and nouns from verbs (e.g. ‘OBject’ from ‘obJECT’) [Citation85–88]. Frequency information is also critical to separating sounds that occur at the same time [Citation58,Citation89] and to music perception [Citation56]. One priority for haptic devices should therefore be provision of frequency information.
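
As an illustration of how such a feature might be obtained, the sketch below estimates the fundamental frequency of a speech frame by autocorrelation. This is a generic textbook method rather than the algorithm used in the studies cited here, and the sample rate, frame length, and pitch search range are assumptions.

```python
# Sketch of fundamental-frequency (F0) estimation by autocorrelation: a
# generic textbook method, not the algorithm used in the cited studies.
import numpy as np


def estimate_f0(frame, fs=16_000, fmin=80.0, fmax=400.0):
    """Estimate F0 (Hz) for one speech frame (>= ~40 ms, e.g. 640 samples)."""
    frame = frame - np.mean(frame)
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # lags for plausible pitches
    lag = lo + int(np.argmax(corr[lo:hi]))    # lag of strongest periodicity
    return fs / lag
```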

Another important feature is how sound changes in amplitude over time (the amplitude envelope). Hearing impairment almost always leads to a reduction in the dynamic range available to the listener (the difference between detection threshold and uncomfortably intense stimulation). The dynamic range available to hearing-impaired listeners is typically around half that of normal-hearing listeners [Citation90]. The dynamic range available for electrical stimulation in CI users, however, is around just an eighth of that for normal-hearing listeners [Citation91–93]. The ability to discriminate sounds at different intensities is also typically severely impaired in CI users [Citation94]. Encouragingly, the dynamic range for vibro-tactile stimulation is around four times that available through electrical stimulation with a CI [Citation52]. The tactile system also has excellent intensity discrimination, which is comparable to that of the healthy auditory system [Citation52,Citation95–99], and is highly sensitive to amplitude-envelope modulations at the frequencies that are most important for speech recognition [Citation100,Citation101]. A second priority for a haptic device should therefore be provision of amplitude-envelope information.
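
A minimal sketch of amplitude-envelope extraction is given below, using the standard rectify-and-smooth approach. The 16 Hz cutoff is an assumed value chosen to retain the slow modulations most important for speech; it is not a parameter taken from the studies discussed here.

```python
# Rectify-and-smooth amplitude-envelope extraction (illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt


def amplitude_envelope(audio, fs=16_000, cutoff_hz=16.0):
    """Full-wave rectify, then low-pass filter to obtain the envelope."""
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, np.abs(audio))
```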

In line with these priorities, many tactile aids that aimed to enhance lip-reading in deaf individuals extracted frequency or amplitude information [Citation36,Citation102]. Previous studies have compared speech recognition when providing the fundamental frequency or amplitude envelope of the speech through audio, either in isolation [Citation103] or in addition to CI-simulated audio [Citation104,Citation105]. Similar benefit to speech reception thresholds was found for each feature. However, the fundamental frequency provided more information about vowel duration and stress, whereas the amplitude envelope provided more information about consonant place, manner, and voicing [Citation103]. This is consistent with the finding that, while each feature provides similar overall benefit to speech-in-noise performance, the provision of both features together provides most benefit [Citation104].

Like tactile aids, recent studies showing benefit of EHS to speech-in-noise performance in CI users have also extracted frequency or amplitude information. Huang and colleagues [Citation7] showed benefit to speech-in-noise performance in CI users by presenting the fundamental frequency through haptic stimulation. Changes in fundamental frequency were delivered through changes in the frequency of haptic stimulation on the fingertips. Besides the issues already discussed regarding the stimulation site and the derivation of the haptic signal from clean speech, delivering information through changes in haptic stimulation frequency is likely to lead to information being lost due to the skin’s poor frequency resolution [Citation79]. One way that some devices have overcome this issue is by mapping frequency to location on the skin. A recent study used the newly developed mosaicOne_B device, which has an array of haptic stimulators on the forearm and uses a novel approach for mapping fundamental frequency to stimulation location [Citation11]. This device was shown to be highly effective at delivering fundamental-frequency information, and was robust to background noise. Future work should evaluate whether the mosaicOne_B can be used to enhance speech-in-noise performance.
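
The sketch below illustrates the general idea of spatial coding: frequency is mapped to a motor index rather than to vibration frequency. The logarithmic mapping and the eight-motor array are illustrative assumptions, not the published parameters of the mosaicOne_B.

```python
# Sketch of spatial coding of F0 across a motor array (illustrative).
import numpy as np


def f0_to_motor_index(f0_hz, n_motors=8, fmin=80.0, fmax=400.0):
    """Map F0 logarithmically (matching pitch perception) to motors 0..n-1."""
    f0 = float(np.clip(f0_hz, fmin, fmax))
    frac = np.log(f0 / fmin) / np.log(fmax / fmin)
    return int(round(frac * (n_motors - 1)))
```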

Other researchers who have shown that EHS can enhance speech-in-noise performance for CI users have primarily focused on providing speech amplitude-envelope information [Citation5,Citation6,Citation8]. In these studies, information was also provided about the relative sound energy across either four [Citation5,Citation8] or seven [Citation6] frequency bands, which were selected to contain substantial speech energy. Frequency and amplitude information were delivered through changes in the intensity of haptic tones focused within the frequency range where tactile sensitivity is high. The frequency separation between these tones meant that they were expected to be individually discriminable. However, as argued above, it may have been possible to transfer more frequency information through a spatial, rather than frequency, mapping [Citation11]. The three studies that have shown improved speech-in-noise performance by providing amplitude-envelope information through haptic stimulation derived their haptic signal from speech in noise, rather than from clean speech [Citation5,Citation6,Citation8]. To do this, two of these studies [Citation5,Citation6] used a simple noise-reduction approach that relied on the speech signal being more intense than the background noise. This is adequate for enhancing speech-in-noise performance for CI users, who typically struggle even when speech is substantially louder than the background noise [Citation5,Citation8]. However, it may not be suitable for hearing aid users, who are typically able to follow speech in situations where the noise is louder than the speech [Citation4]. Future work should assess the effectiveness of more sophisticated methods for extracting signals in noise to widen the applicability of this approach [Citation106,Citation107].
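
A minimal sketch of this style of processing is shown below: envelopes are extracted from a small number of speech-dominated frequency bands, and frames that do not clearly exceed a noise-floor estimate (here, a low percentile of each band’s envelope) are suppressed. The band edges, frame length, and gating rule are assumptions for illustration, not the parameters used in the cited studies.

```python
# Band envelopes plus a simple intensity-based noise gate (illustrative).
import numpy as np
from scipy.signal import butter, sosfilt

BAND_EDGES_HZ = [(100, 500), (500, 1000), (1000, 2000), (2000, 4000)]


def band_envelopes(audio, fs=16_000, frame=160):
    """Return per-frame RMS energy in each band (10 ms frames at 16 kHz)."""
    envs = []
    for lo, hi in BAND_EDGES_HZ:
        sos = butter(4, [lo / (fs / 2), hi / (fs / 2)],
                     btype="bandpass", output="sos")
        band = sosfilt(sos, audio)
        n = len(band) // frame
        envs.append(np.sqrt(np.mean(band[:n * frame].reshape(n, frame) ** 2,
                                    axis=1)))
    return np.stack(envs)  # shape: (n_bands, n_frames)


def gate_noise(envs, floor_percentile=20, margin=2.0):
    """Suppress frames not clearly above each band's estimated noise floor."""
    floor = np.percentile(envs, floor_percentile, axis=1, keepdims=True)
    return np.where(envs > margin * floor, envs, 0.0)
```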

Another important feature is sound location. In normal-hearing listeners, the origin of a sound is determined primarily by assessing differences in the intensity and arrival time between the ears. As previously discussed, highly accurate sound localization has been shown using haptic stimulation derived from audio received by behind-the-ear devices [Citation9,Citation10]. In this work, sounds were located using intensity differences across the wrists [Citation52], which matched the sound intensity differences across the ears. The differences in arrival time between the ears were also provided through haptic stimulation, but these differences were much smaller than can likely be discriminated by the tactile system [Citation108,Citation109]. Future work could explore methods for enhancing spatial-hearing cues to further improve haptic sound-localization [Citation110–112]. One approach that might be explored is to remap time difference cues to intensity differences so that they can be effectively extracted by the tactile system.
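
The sketch below illustrates both ideas: deriving left and right wrist stimulation intensities from the level difference between behind-the-ear microphones, and optionally exaggerating a time-difference cue as an additional intensity difference. All scaling values are illustrative assumptions.

```python
# Driving two wrist stimulators from the between-ears level difference,
# with an optional remapping of a time cue into an intensity cue.
import numpy as np


def wrist_drive_levels(left_rms, right_rms, itd_ms=0.0,
                       itd_gain_db_per_ms=10.0):
    """Return (left, right) drive levels in dB relative to a base level.

    A positive itd_ms means the sound arrived at the left ear first; the
    time cue is converted to an intensity cue, which the tactile system
    can discriminate far more readily.
    """
    ild_db = 20 * np.log10(max(left_rms, 1e-9) / max(right_rms, 1e-9))
    ild_db += itd_gain_db_per_ms * itd_ms
    return ild_db / 2, -ild_db / 2
```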

Any haptic signal-processing that is deployed must be computationally lightweight. This is to avoid incurring a delay in the arrival of the haptic signal that could disrupt binding of auditory, visual (e.g. lip reading), and haptic information. It will also be important for allowing the signal-processing unit to be compact and power efficient. There is encouraging evidence that a processing delay of tens of milliseconds may be acceptable, although there is insufficient evidence currently to establish this with confidence. One line of evidence comes from research studying the influence of haptic stimulation (air puffs) on the perception of aspirated and unaspirated syllables [Citation113]. In this work, it was found that the influence of haptic stimulation was not significantly reduced when haptic stimulation was delayed by up to 100 ms. Other work has shown evidence of ‘temporal recalibration’, where consistent delays of several tens of milliseconds between correlated sensory inputs are rapidly corrected for in the brain so that perceptual synchrony is retained [Citation114–117]. If haptic stimulation can be delayed from the audio and visual signal by tens of milliseconds, then this could allow for sophisticated signal-processing strategies to be implemented in haptic devices.

Another technology that might maximize the effectiveness of signal-processing regimes is low-latency data streaming between haptic devices. This could be achieved using radio or Bluetooth LE technology, which is discussed in the Audio signal acquisition section. One way in which streaming between devices may be important is for linking signal-processing that adjusts the signal intensity or delay, such as compressors, to avoid distortion of spatial hearing cues [Citation118].

3.3. Signal delivery

3.3.1. Stimulation method

Once the signal has been processed, the next consideration is how it should be delivered. Haptic stimulation has traditionally been delivered either through electro-tactile stimulation, whereby a current is passed through the skin, or vibro-tactile stimulation, whereby the skin is mechanically indented. The usable frequency and amplitude ranges for electro-tactile stimulation are substantially smaller than for vibro-tactile stimulation [Citation119–122]. Furthermore, because electro-tactile stimulation depends on the electrical resistance of the skin, it is strongly affected by the skin’s moisture content and by small changes in the stimulation location [Citation119,Citation120,Citation123]. Because of the limited frequency and amplitude range for electro-tactile stimulation, sound information has typically been delivered using arrays of electrical stimulators, with sound features mapped to changes in stimulation location and pulse rate [Citation124,Citation125]. Besides these limitations, there are also safety concerns with electrical stimulation that do not apply to vibro-tactile stimulation. Firstly, because the fingers have a lower electrical resistance than most other body parts, devices designed for other body parts must ensure that the electrical contacts cannot be touched by the user’s finger [Citation126]. Secondly, if mounted on the chest, electro-tactile devices may not be suitable for those with pacemakers. One advantage of using electrical stimulation is that it may require less power, and therefore allow a longer battery life for the device [Citation126]. However, given the limitations and additional safety considerations, vibro-tactile stimulation appears to be a more suitable stimulation method.

Recent developments in haptic motor and driver technology have made it possible for precisely controlled vibro-tactile stimulation to be delivered in compact devices at a low cost. Because of their higher power-efficiency, linear resonant actuators (which generate vibration through a voice coil moving a mass) may be preferred to eccentric rotating mass motors (which generate vibration through rotation of an unbalanced load). Piezoelectric motors also have high power-efficiency but are often expensive. The response latency and precision of waveform tracking for linear resonant actuators and eccentric rotating mass motors can be improved using overdrive and active-braking techniques. Overdrive involves temporarily driving the motor above its rated voltage to reduce the time it takes to rise to its target intensity. Active braking involves applying a reverse voltage to reduce the time the motor takes to fall to its target intensity. Application of these techniques using the latest haptic-driver technology may be important for achieving sufficiently precise speech amplitude-envelope tracking.
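
The sketch below generates an idealized drive-voltage envelope showing these two techniques. The voltages and durations are assumed values; commercial haptic drivers typically implement overdrive and braking in closed loop.

```python
# Idealized drive-voltage envelope with overdrive and active braking.
import numpy as np


def drive_waveform(target_v=1.0, rated_v=2.0, fs=8_000,
                   overdrive_ms=5.0, hold_ms=40.0, brake_ms=5.0):
    """Return a drive envelope: overdrive, hold at target, then brake."""
    n_over = int(fs * overdrive_ms / 1000)
    n_hold = int(fs * hold_ms / 1000)
    n_brake = int(fs * brake_ms / 1000)
    return np.concatenate([
        np.full(n_over, rated_v),    # overdrive: faster rise to target
        np.full(n_hold, target_v),   # steady drive at the target level
        np.full(n_brake, -rated_v),  # reverse voltage: faster stop
        np.zeros(n_over),            # off
    ])
```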

In addition to providing vibro-tactile stimulation, the Tactile and Squeeze Bracelet Interface (Tasbi), which was recently developed by Facebook Reality Labs, modulates the amount of pressure applied [Citation127]. This prototype device, which was developed to enhance interactions in virtual environments, has a tensioning mechanism that adjusts the amount of ‘squeeze’ as well as six linear resonant actuators spaced around the wrist. One way in which squeeze intensity could be used in a haptic device for the hearing impaired is to provide information about absolute sound intensity. This would allow vibro-tactile stimulation to be focused on providing detailed information about more subtle local amplitude changes. Squeeze feedback could also be effective for supporting music, film, and video games as it has been argued that it elicits emotional responses and is less attention demanding than vibro-tactile stimulation [Citation127–129].

3.3.2. Stimulation site

After establishing the most appropriate stimulation method, the stimulation site must then be considered. A suitable site will be sufficiently sensitive to allow sound information to be effectively transferred, whilst allowing easy device self-fitting, high comfort, and minimal disruption to common activities. Some recent studies have provided haptic stimulation to the fingertip [Citation6,Citation7,Citation130], because it is highly sensitive and contains a high density of tactile receptors [Citation131]. However, the fingertip does not seem an optimal site for real-world use as it is frequently involved in everyday tasks. An alternative site, also used in recent studies, is the wrist [Citation5,Citation8–10]. Although the wrist has higher vibro-tactile detection thresholds than the fingertip [Citation132] and a lower density of tactile receptors [Citation131], there is evidence that intensity discrimination is enhanced at the wrist compared to the fingertip, and that frequency discrimination and temporal-gap detection are similar [Citation132]. Moreover, the wrist would seem a practical site for a real-world application. Wrist-worn devices are familiar, esthetically unobtrusive, do not impede everyday tasks, and are easy to self-fit.

Figure 1 shows the mosaicOne_C, a wrist-worn haptic device for augmenting CI listening that is currently under development. Building on the approach used in the mosaicOne_B device [Citation11], which is worn on the forearm, fundamental frequency can be mapped to stimulation location around the wrist using four vibro-tactile motors. The perception of haptic stimulation can be created at a continuum of positions around the wrist by panning between the motors, which maximizes the resolution of the device (see the sketch below). The Buzz, another wrist-worn haptic device for enhancing auditory perception, also has multiple motors arranged around the wrist. The precise signal-processing strategy used to convert audio to haptic stimulation is not in the public domain, but the Buzz does not map the fundamental frequency of a sound to a position on the wrist. Other multi-motor prototype wrist-worn devices have been developed for other applications, such as enhancing virtual and augmented reality (e.g. the Tasbi, discussed above), delivering more detailed notifications and alerts [Citation133], or improving color discrimination in color-blind people [Citation134].
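
The sketch below shows one simple way such panning could be implemented, sharing drive amplitude between the two motors adjacent to the target position. The four-motor layout matches the device described above, but the linear panning law is an assumption for illustration (a constant-power law could equally be used).

```python
# Amplitude panning between adjacent motors around the wrist (sketch).
import numpy as np


def pan_to_motors(position, n_motors=4):
    """Map a position in [0, 1) around the wrist to per-motor amplitudes."""
    x = (position % 1.0) * n_motors
    i = int(x)                        # nearest motor counter-clockwise
    frac = x - i
    amps = np.zeros(n_motors)
    amps[i] = 1.0 - frac
    amps[(i + 1) % n_motors] = frac   # share drive with the next motor
    return amps
```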

Figure 1. Image of the mosaicOne_C wrist-worn haptic device currently under development as part of the Electro-Haptics Research Project (www.electrohaptics.co.uk) at the University of Southampton (UK). Four haptic motors are housed around a rubber wrist-strap. Image reproduced with permission of Samuel Perry and Mark Fletcher.

One potential limitation of providing haptic stimulation at the wrist or finger is the frequent movements and changes in relative position during many activities. This could distort sound information, particularly if transmitted through differences in stimulation across the hands or wrists. This idea is supported by work showing that crossing the arms impairs temporal-order judgments for haptic stimulation across the hands [Citation135,Citation136], although it is not clear whether this can be overcome with training. Other evidence suggests that changes in relative arm position do not impair the perception of intensity difference cues, which are used for haptic sound-localization [Citation9,Citation10]. For example, one study found that haptic intensity perception on one hand was modulated by haptic stimulation on the other hand, but that this modulation did not depend on the relative positions of the hands [Citation137]. However, further work is required to properly assess the impact of body motion on the transfer of sound features through haptic stimulation.

Given the possibility that changes in the relative position of haptic devices might impair information transfer, sites whose relative positions are more fixed should be considered. Previously, tactile aids have been developed that provide stimulation on the sternum [Citation138], abdomen [Citation139], or back [Citation140]. Wilska [Citation141] compared the sensitivity of different sites. He found the sensitivity of the sternum to be quite similar to the wrist, the abdomen to have much lower sensitivity, and some areas of the back to be less sensitive than the wrist or sternum but substantially more sensitive than the abdomen. Other potential sites for haptic stimulation might be the biceps or feet. Like the back, these sites are less sensitive than the wrist or sternum but are more sensitive than the abdomen. While many of these candidate sites benefit from allowing devices to be discreet, some may raise difficulties for self-fitting or lead to uncomfortable feelings of restrictedness that were reported by some users of body-worn tactile aids.

For devices that map changes in stimulus features to changes in the location of stimulation, it is also important to consider the spatial acuity of the tactile system at different sites. The ability to discriminate two spatially separate stimuli varies substantially across different parts of the body. For example, spatial acuity is high at the fingertip, is reduced on the forearm, and is reduced further still on the shoulders [Citation142]. It should be noted, however, that there is more space available for across-site stimulation on the forearm and shoulder than on the fingertip. As well as carefully selecting the stimulation site, designers of devices that use spatial mapping of stimulus features should consider the decline in spatial acuity with age [Citation143], ensuring that motors are sufficiently spaced to retain performance in older populations.

3.4. Device specifications

Several additional specifications must be met if a haptic device is to be clinically successful. One important issue is power management. Hearing-assistive devices target a minimum battery life of 14 hours, so that a typical user (who sleeps for 8 hours each day) needs only to charge their device overnight. However, modern devices using lithium-ion batteries often last several days on a single charge. With careful power management and use of low-power motor (e.g. linear resonant actuators) and wireless (e.g. Bluetooth LE) technology, as well as computationally lightweight signal processing, a haptic device that meets the required battery-life is readily achievable. The Buzz wrist-worn haptic device, for example, can be continuously used for more than 24 hours with a single charge.
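
To make the feasibility claim concrete, the back-of-envelope calculation below checks an assumed component budget against the 14-hour target. Every figure is a hypothetical, illustrative value, not a measurement of any actual device.

```python
# Back-of-envelope battery-life check against the 14-hour target.
# All current draws below are hypothetical, illustrative values.
battery_mah = 200                         # assumed small wrist-worn cell
draws_ma = {
    "microcontroller + DSP": 4.0,         # lightweight signal processing
    "Bluetooth LE streaming": 5.0,
    "haptic motors (duty-cycled)": 5.0,
}
total_ma = sum(draws_ma.values())
print(f"Estimated battery life: {battery_mah / total_ma:.1f} hours")  # ~14.3 h
```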

Other important considerations for haptic-device design are esthetic attractiveness, compactness, discreetness, and comfort. It will be important for any haptic device to be lightweight and have a small footprint, although the precise acceptable form-factor will no doubt be influenced by the amount of benefit the device gives. A compact and lightweight device can readily be produced using recently developed low-cost, compact motor and haptic-driver technology in combination with the battery, wireless, and signal-processing technology already implemented in hearing-assistive devices. A common complaint about tactile aids was that they highlighted that the user had a hearing impairment. This could be an issue for devices fitted at sites where they are likely to be visible, such as the wrist. However, given the current prevalence of smartwatches, a wrist-worn device with a sufficiently modern design (like that shown in Figure 1) may be acceptable.

Another important feature of any device will be ease of use for the patient and clinician. This will include considerations already mentioned, such as ease of self-fitting, but may also mean the inclusion of adjustable device settings through intuitive buttons on the device or a linked smartphone app. It is also possible that device tuning, based on the user’s vibro-tactile detection and discomfort thresholds, will be required to maximize comfort and the dynamic range available to the device. To facilitate uptake, tuning routines for clinicians or users must be fast and intuitive. It is also possible that the optimal haptic signal-processing strategy will depend on the user’s hearing-assistive device type and programming. In this case, firmware updates that adjust the haptic signal-processing strategy could be sent from the hearing-assistive device when a new haptic device is paired with it. This would require either close collaboration between hearing-assistive and haptic device manufacturers, or for hearing-assistive device manufacturers to develop their own haptic devices. However, it is important to note that across a number of studies that have shown clear benefits of EHS for a range of CI devices, there was no individual tuning of haptic stimulation [Citation5,Citation7–9]. Furthermore, despite substantial variation in vibro-tactile detection thresholds, no correlation between the size of the EHS benefit and detection threshold has been found [Citation5,Citation6,Citation8–10]. It is therefore possible that effective haptic devices could be developed that require little or no individual tuning.
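
As an illustration of such tuning, the sketch below compresses an audio-envelope level into the range between a user’s vibro-tactile detection and discomfort thresholds. The threshold values and input range are hypothetical.

```python
# Per-user tuning sketch: map envelope level into the user's tactile
# dynamic range. Threshold values and input range are assumed.
import numpy as np


def map_to_tactile_level(env_db, in_range=(-40.0, 0.0),
                         detection_db=0.0, discomfort_db=24.0):
    """Linearly map envelope level (dB) onto the user's tactile range (dB SL)."""
    lo, hi = in_range
    frac = np.clip((env_db - lo) / (hi - lo), 0.0, 1.0)
    return detection_db + frac * (discomfort_db - detection_db)
```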

Finally, additional features might be added to haptic devices to assist in daily life. For example, the device might connect to a range of smart devices within the Internet of Things to improve awareness and safety. These might include doorbells, telephones, baby monitors, ovens, and wake-up, intruder, fire, or carbon monoxide alarms. The effectiveness of some of these additional features will partially depend on the haptic device having a long battery-life or allowing easy switching of battery units.

4. Conclusion

Exciting new evidence has recently emerged showing that providing missing sound-information through haptic stimulation could be highly effective in augmenting hearing-assistive devices. This approach could also be used to aid the many millions of hearing-impaired people worldwide who cannot access hearing-assistive technology. So far, the approach has shown particular promise for CI users, for whom impressive improvements to speech-in-noise performance and spatial hearing have been demonstrated. These laboratory findings must now be reproduced in the real world with a device that is appropriate for clinical use. The technology required to develop such a device is already available. However, a large amount of work remains to establish the best way to acquire and process the audio signal, the optimal device configuration, and the most suitable stimulation site. Furthermore, an effective device will likely require the combination of cutting-edge motor, battery, microprocessor, and wireless-communication technology. If this can be achieved, then such a device could provide a noninvasive, low-cost means of substantially improving outcomes for hearing-impaired listeners.

5. Expert opinion

It is predicted that the number of people with a disabling hearing loss will nearly double in the next 30 years [Citation20]. There is therefore a rapidly growing population that could potentially benefit from the use of haptic stimulation to provide auditory information. It seems likely that haptics can provide most benefit to those with severe-to-profound hearing impairments, who either have CIs or would be CI candidates. For those fortunate enough to have access to CIs, an effective haptic device could significantly increase spatial awareness and the ability to hear in noisy environments. It could also offer an inexpensive means of acquiring the benefits of a second CI without the need for an expensive second surgery. This could substantially reduce costs for individuals and healthcare services. However, many people across the world cannot access facilities for implanting a CI or providing a hearing aid, with cost being a major prohibitive factor. In India, for example, the cost of getting a CI is several times the average annual personal income [Citation17], making implants unaffordable for the majority of candidates. For these people, an effective haptic device might offer an affordable means of recovering critical access to the auditory world. This could allow children and adults far greater access to education, work, and leisure, and thereby substantially improve their quality of life.

Currently, the main barrier to uptake of this approach is the absence of an effective, clinically approved haptic device. If an effective device were available that was inexpensive, comfortable, discreet, easy for the user to self-fit, and easy for the clinician to tune to the individual, then it is difficult to see significant barriers to uptake. Substantial work remains, however, to establish the optimal signal-processing strategy and device configuration to maximize benefit for both hearing-assistive device users and those who cannot access hearing-assistive technology. There are also significant challenges ahead in designing and manufacturing a suitable haptic device, carrying out carefully controlled large-scale real-world trials, and obtaining clinical approval. All these challenges, however, can be met.

Within the next five years, a significant expansion in the number of researchers working in this area is anticipated. As the field grows, the range of outcome measures used to assess the benefits of haptic stimulation to hearing is also expected to increase. For example, it will likely soon be understood whether haptic stimulation can be used to reduce listening effort and improve access to speech prosody. Advanced neuroimaging methods, such as near-infrared spectroscopy and electroencephalography, will also likely be deployed so that the mechanisms underlying haptic enhancement of hearing can be understood. The biggest development, however, is expected to be the production of an effective device and the translation from laboratory testing to real-world trials. Developing such a device will require bringing together several cutting-edge technologies, likely including 3D printing, compact power cells, low-latency data streaming, microprocessors, haptic drivers, and micro-motors. It will be critical for clinicians, engineers, researchers, and industry to work closely together. By doing this, it seems likely that, within the next five years, we will see a clinically approved haptic device to enhance auditory perception in hearing-impaired listeners.

Article highlights

  • Recent studies have shown compelling evidence that haptic stimulation can be used to enhance spatial hearing and speech-in-noise performance for cochlear implant users. Haptic stimulation might also have utility for hearing aid users, particularly for improving spatial hearing, as well as for those without access to hearing-assistive devices.

  • Laboratory studies are required to establish the limits of this approach, such as how much delay there can be between the audio and haptic signal before the benefits of haptic stimulation decrease. These studies will be critical for informing haptic device design.

  • Significant questions remain regarding how best to acquire the audio signal that is converted to haptic stimulation, how best to process and deliver the haptic signal, and the precise specification for a successful device.

  • The technology required to develop a device that meets the anticipated requirements already exists.

  • Experiments have so far been confined to the laboratory and field trials are required to fully establish the efficacy of the approach.

Declaration of interest

M Fletcher is employed by the University of Southampton but his salary is funded by the William Demant Foundation. The author has no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.

Reviewer disclosures

Peer reviewers on this manuscript have no relevant financial or other relationships to disclose.

Acknowledgments

My sincerest thanks to Carl Verschuur for his continued support, to Beeps Perry for the many insightful discussions, to Andriano Brookmani for help with the manuscript text, and to Alex and Helen Fletcher for their patience and support during the writing of this manuscript.

Additional information

Funding

This paper was funded by the William Demant Foundation, Kongebakken 9, 2765 Smørum, Denmark.

Notes

1. Other researchers have used the term ‘electro-tactile stimulation’. The term ‘electro-haptic stimulation’ is preferred as electro-tactile stimulation is commonly used to refer to electrical stimulation of the skin, rather than to using tactile stimulation to augment CI listening.

References

  • Zeng FG, Rebscher S, Harrison W, et al. Cochlear implants: system design, integration, and evaluation. IEEE Rev Biomed Eng. 2008;1:115–142.
  • Spriet A, Van Deun L, Eftaxiadis K, et al. Speech understanding in background noise with the two-microphone adaptive beamformer BEAM in the nucleus freedom cochlear implant system. Ear Hear. 2007 Feb;28(1):62–72.
  • Dorman MF, Loiselle LH, Cook SJ, et al. Sound source localization by normal-hearing listeners, hearing-impaired listeners and cochlear implant listeners. Audiol Neurootol. 2016 Apr;21(3):127–131.
  • Miller CW, Bates E, Brennan M. The effects of frequency lowering on speech perception in noise with adult hearing-aid users. Int J Audiol. 2016 Mar;55(5):305–312.
  • Fletcher MD, Hadeedi A, Goehring T, et al. Electro-haptic enhancement of speech-in-noise performance in cochlear implant users. Sci Rep. 2019 Aug;9(1):11428.
  • Fletcher MD, Mills SR, Goehring T. Vibro-tactile enhancement of speech intelligibility in multi-talker noise for simulated cochlear implant listening. Trends Hear. 2018 Jan;22:1–11.
  • Huang J, Sheffield B, Lin P, et al. Electro-tactile stimulation enhances cochlear implant speech recognition in noise. Sci Rep. 2017 May;7(1):2196.
  • Fletcher MD, Song H, Perry SW. Electro-haptic stimulation enhances speech recognition in spatially separated noise for cochlear implant users. Sci Rep. 2020 Jul;10(1):12723.
  • Fletcher MD, Cunningham RO, Mills SR. Electro-haptic enhancement of spatial hearing in cochlear implant users. Sci Rep. 2020 Jan;10(1):1621.
  • Fletcher MD, Zgheib J. Haptic sound-localisation for use in cochlear implant and hearing-aid users. Sci Rep. 2020 Aug;10(1):14171.
  • Fletcher MD, Thini N, Perry SW. Enhanced pitch discrimination for cochlear implant users with a new haptic neuroprosthetic. Sci Rep. 2020 Jun;10(1):10354.
  • Zeng FG. Cochlear implants: why don’t more people use them? Hear J. 2007;60(3):48–49.
  • Ding X, Tian H, Wang W, et al. Cochlear implantation in China: review of 1237 cases with an emphasis on complications. ORL J Otorhinolaryngol Relat Spec. 2009;71(4):192–195.
  • Khan MI, Mukhtar N, Saeed SR, et al. The Pakistan (Lahore) cochlear implant programme: issues relating to implantation in a developing country. J Laryngol Otol. 2007 Aug;121(8):745–750.
  • Bodington E, Saeed SR, Smith MCF, et al. A narrative review of the logistic and economic feasibility of cochlear implants in lower-income countries. Cochlear Implants Int. 2020;16:1–10.
  • Farinetti A, Ben Gharbia D, Mancini J, et al. Cochlear implant complications in 403 patients: comparative study of adults and children and review of the literature. Eur Ann Otorhinolaryngol Head Neck Dis. 2014 Jun;131(3):177–182.
  • Krishnamoorthy K, Samy RN, Shoman N. The challenges of starting a cochlear implant programme in a developing country. Curr Opin Otolaryngol Head Neck Surg. 2014 Oct;22(5):367–372.
  • Kushalnagar P, Mathur G, Moreland CJ, et al. Infants and children with hearing loss need early language access. J Clin Ethics. 2010 Apr;21(2):143–154.
  • Ouellet C, Cohen H. Speech and language development following cochlear implantation. J Neurosci. 1999;12(3–4):271–288.
  • World Health Organization. Deafness and hearing loss [cited 2020 Aug 20]. Available from: https://www.who.int/mediacentre/factsheets/fs300/en/
  • Bess FH, Dodd-Murphy J, Parker RA. Children with minimal sensorineural hearing loss: prevalence, educational performance, and functional status. Ear Hear. 1998 Oct;19(5):339–354.
  • Lieu JE. Speech-language and educational consequences of unilateral hearing loss in children. Arch Otolaryngol Head Neck Surg. 2004 May;130(5):524–530.
  • Olusanya BO, Newton VE. Global burden of childhood hearing impairment and disease control priorities for developing countries. Lancet. 2007 Apr 14;369(9569):1314–1317.
  • Tucci D, Merson MH, Wilson BS. A summary of the literature on global hearing impairment: current status and priorities for action. Otol Neurotol. 2010 Jan;31(1):31–41.
  • Olusanya BO, Neumann KJ, Saunders JE. The global burden of disabling hearing impairment: a call to action. Bull World Health Organ. 2014 May 1;92(5):367–373.
  • Goodfellow LD. Experiments on the senses of touch and vibration. J Acoust Soc Am. 1934;6(1):45–50.
  • Gault RH. Touch as a substitute for hearing in the interpretation and control of speech. Arch Otolaryngol. 1926;3(2):121–135.
  • Gault RH. On the effect of simultaneous tactual-visual stimulation in relation to the interpretation of speech. J Soc Psychol. 1930;24(4):498–517.
  • Gault RH. Progress in experiments on tactile interpretation of oral speech. J Soc Psychol. 1924;19:155–159.
  • Bach-y-Rita P. Tactile sensory substitution studies. Ann N Y Acad Sci. 2004 May;1013(1):83–91.
  • Bach-y-Rita P. Brain mechanisms in sensory substitution. New York: Academic Press; 1972.
  • Bach-y-Rita P, Collins CC, Saunders FA, et al. Vision substitution by tactile image projection. Nature. 1969 Mar 8;221(5184):963–964.
  • Bach-y-Rita P, Tyler ME, Kaczmarek KA. Seeing with the brain. Int J Hum Comput Interact. 2003 Nov;15(2):285–295.
  • Kishon-Rabin L, Boothroyd A, Hanin L. Speechreading enhancement: a comparison of spatial-tactile display of voice fundamental frequency (F-0) with auditory F-0. J Acoust Soc Am. 1996 Jul;100(1):593–602.
  • Plant G. The selection and training of tactile aid users. In: Summers IR, editor. Tactile aids for the hearing impaired. London: Whurr; 1992. p. 146–166.
  • Brooks PL, Frost BJ. Evaluation of a tactile vocoder for word recognition. J Acoust Soc Am. 1983 Jul;74(1):34–39.
  • Brooks PL, Frost BJ, Mason JL, et al. Acquisition of a 250-word vocabulary through a tactile vocoder. J Acoust Soc Am. 1985 May;77(4):1576–1579.
  • Thornton ARD, Phillips AJ. A comparative trial of four vibrotactile aids. In: Summers IR, editor. Tactile aids for the hearing impaired. London: Whurr; 1992. p. 231–251.
  • Drullman R, Bronkhorst AW. Speech perception and talker segregation: effects of level, pitch, and tactile support with multiple simultaneous talkers. J Acoust Soc Am. 2004 Nov;116(5):3090–3098.
  • Peters BR, Wyss J, Manrique M. Worldwide trends in bilateral cochlear implantation. Laryngoscope. 2010 May;120(Suppl 2):17–44.
  • van Hoesel RJM, Tyler RS. Speech perception, localization, and lateralization with bilateral cochlear implants. J Acoust Soc Am. 2003 Mar;113(3):1617–1630.
  • Litovsky RY, Parkinson A, Arcaroli J. Spatial hearing and speech intelligibility in bilateral cochlear implant users. Ear Hear. 2009 Aug;30(4):419–431.
  • Finbow J, Bance M, Aiken S, et al. A comparison between wireless CROS and bone-anchored hearing devices for single-sided deafness: a pilot study. Otol Neurotol. 2015 Jun;36(5):819–825.
  • Grewal AS, Kuthubutheen J, Smilsky K, et al. The role of a new contralateral routing of signal microphone in established unilateral cochlear implant recipients. Laryngoscope. 2015 Jan;125(1):197–202.
  • Kurien G, Hwang E, Smilsky K, et al. The benefit of a wireless contralateral routing of signals (CROS) microphone in unilateral cochlear implant recipients. Otol Neurotol. 2019 Feb;40(2):82–88.
  • Dorman MF, Natale SC, Agrawal S. The value of unilateral CIs, CI-CROS and bilateral CIs, with and without beamformer microphones, for speech understanding in a simulation of a restaurant environment. Audiol Neurootol. 2018 Dec;23(5):270–276.
  • Gescheider GA. Some comparisons between touch and hearing. IEEE Trans Man Mach Syst. 1970 Mar;11(1):28–35.
  • Frost BJ, Richardson BL. Tactile localization of sounds - acuity, tracking moving sources, and selective attention. J Acoust Soc Am. 1976 Apr;59(4):907–914.
  • Richardson BL, Frost BJ. Tactile localization of the direction and distance of sounds. Percept Psychophys. 1979 Apr;25(4):336–344.
  • Richardson BL, Wuillemin DB, Saunders FJ. Tactile discrimination of competing sounds. Percept Psychophys. 1978 Dec;24(6):546–550.
  • Gault RH. Recent developments in vibro-tactile research. J Franklin Inst. 1936 Jun;221(6):703–719.
  • Fletcher MD, Zgheib J, Perry SW. Sensitivity to haptic sound-localisation cues. Sci Rep. 2020.
  • Aronoff JM, Yoon YS, Freed DJ, et al. The use of interaural time and level difference cues by bilateral cochlear implant users. J Acoust Soc Am. 2010 Mar;127(3):EL87–EL92.
  • Dunn CC, Perreau A, Gantz B, et al. Benefits of localization and speech perception with multiple noise sources in listeners with a short-electrode cochlear implant. J Am Acad Audiol. 2010 Jan;21(1):44–51.
  • Verschuur CA, Lutman ME, Ramsden R, et al. Auditory localization abilities in bilateral cochlear implant recipients. Otol Neurotol. 2005 Sep;26(5):965–971.
  • McDermott HJ. Music perception with cochlear implants: a review. Trends Amplif. 2004 Mar;8(2):49–82.
  • Brockmeier SJ, Fitzgerald D, Searle O, et al. The MuSIC perception test: a novel battery for testing music perception of cochlear implant users. Cochlear Implants Int. 2011 Feb;12(1):10–20.
  • Roberts B, Brunstrom JM. Perceptual segregation and pitch shifts of mistuned components in harmonic complexes and in regular inharmonic complexes. J Acoust Soc Am. 1998 Oct;104(4):2326–2338.
  • Huang J, Lu T, Sheffield B, et al. Electro-tactile stimulation enhances cochlear-implant melody recognition: effects of rhythm and musical training. Ear Hear. 2020;41(1):106–113.
  • Luo X, Hayes L. Vibrotactile stimulation based on the fundamental frequency can improve melodic contour identification of normal-hearing listeners with a 4-channel cochlear implant simulation. Front Neurosci. 2019 Oct;13:1145.
  • Drennan WR, Oleson JJ, Gfeller K, et al. Clinical evaluation of music perception, appraisal and experience in cochlear implant users. Int J Audiol. 2015 Feb;54(2):114–123.
  • Kang R, Nimmons GL, Drennan W, et al. Development and validation of the University of Washington Clinical Assessment of Music Perception test. Ear Hear. 2009 Aug;30(4):411–418.
  • Bleidt R, Borsum A, Fuchs H, et al. Object-based audio: opportunities for improved listening experience and increased listener involvement. SMPTE Motion Imaging J. 2015 Oct;124(5):1–13.
  • Ward LA, Shirley BG. Personalization in object-based audio for accessibility: a review of advancements for hearing impaired listeners. J Audio Eng Soc. 2019 Jun;67(7/8):584–597.
  • Peterson PM, Wei SM, Rabinowitz WM, et al. Robustness of an adaptive beamforming method for hearing aids. Acta Otolaryngol Suppl. 1990;469:85–90.
  • Dorman MF, Gifford RH. Speech understanding in complex listening environments by listeners fit with cochlear implants. J Speech Lang Hear Res. 2017 Oct;60(10):3019–3026.
  • Burr D, Silva O, Cicchini GM, et al. Temporal mechanisms of multimodal binding. Proc R Soc B Biol Sci. 2009 May 22;276(1663):1761–1769.
  • Ernst MO, Bulthoff HH. Merging the senses into a robust percept. Trends Cogn Sci. 2004 Apr;8(4):162–169.
  • Fujisaki W, Nishida S. Temporal frequency characteristics of synchrony-asynchrony discrimination of audio-visual signals. Exp Brain Res. 2005 Oct;166(3–4):455–464.
  • Parise CV, Ernst MO. Correlation detection as a general mechanism for multisensory integration. Nat Commun. 2016 Jun;7(1):11543.
  • Parise CV, Spence C, Ernst MO. When correlation implies causation in multisensory integration. Curr Biol. 2012 Jan 10;22(1):46–49.
  • Launer S, Zakis JA, Moore BCJ. Hearing aid signal processing. In: Popelka GR, Moore BCJ, Fay RR, et al., editors. Hearing aids. Springer handbook of auditory research, Vol. 56. Cham: Springer; 2016.
  • Chen JD, Benesty J, Huang Y. A minimum distortion noise reduction algorithm with multiple microphones. IEEE Trans Audio Speech Lang Process. 2008 Mar;16(3):481–493.
  • Szurley J, Bertrand A, Van Dijk B, et al. Binaural noise cue preservation in a binaural noise reduction system with a remote microphone signal. IEEE/ACM Trans Audio Speech Lang Process. 2016 May;24(5):952–966.
  • Schulte K. Fonator system: speech stimulation and speech feedback by technically amplified one-channel vibrations. In: Fant G, editor. International symposium on speech communication ability and profound deafness. Vol. 36. Washington, DC: AG Bell Association; 1972. p. 351–353.
  • Verrillo RT. Change in vibrotactile thresholds as a function of age. Sens Processes. 1979;3(1):49–59.
  • Byrne D, Dillon H, Tran K, et al. An international comparison of long-term average speech spectra. J Acoust Soc Am. 1994 Oct;96(4):2108–2120.
  • Weisenberger JM. Evaluation of the Siemens Minifonator vibrotactile aid. J Speech Hear Res. 1989 Mar;32(1):24–32.
  • Goff GD. Differential discrimination of frequency of cutaneous mechanical vibration. J Exp Psychol. 1967 Jun;74(2):294–299.
  • Rothenberg M, Verrillo RT, Zahorian SA, et al. Vibrotactile frequency for encoding a speech parameter. J Acoust Soc Am. 1977 Oct;62(4):1003–1012.
  • Tyler RS. Frequency resolution in hearing-impaired listeners. In: Moore BCJ, editor. Frequency selectivity in hearing. London: Academic Press; 1986. p. 309–371.
  • Pretorius LL, Hanekom JJ. Free field frequency discrimination abilities of cochlear implant users. Hear Res. 2008 Oct;244(1–2):77–84.
  • Abberton E, Fourcin AJ. Intonation and speaker identification. Lang Speech. 1978 Dec;21(4):305–318.
  • Titze IR. Physiologic and acoustic differences between male and female voices. J Acoust Soc Am. 1989 Apr;85(4):1699–1707.
  • Most T, Peled M. Perception of suprasegmental features of speech by children with cochlear implants and children with hearing aids. J Deaf Stud Deaf Educ. 2007 May;12(3):350–361.
  • Peng SC, Tomblin JB, Turner CW. Production and perception of speech intonation in pediatric cochlear implant recipients and individuals with normal hearing. Ear Hear. 2008 Jun;29(3):336–351.
  • Meister H, Landwehr M, Pyschny V, et al. The perception of prosody and speaker gender in normal-hearing listeners and cochlear implant recipients. Int J Audiol. 2009;48(1):38–48.
  • Luo X, Fu QJ, Galvin JJ 3rd. Vocal emotion recognition by normal-hearing listeners and cochlear implant users. Trends Amplif. 2007 Dec;11(4):301–315.
  • Roberts B, Brunstrom JM. Perceptual fusion and fragmentation of complex tones made inharmonic by applying different degrees of frequency shift and spectral stretch. J Acoust Soc Am. 2001 Nov;110(5):2479–2490.
  • Pascoe DP. Clinical measurements of the auditory dynamic range and their relation to formulas for hearing aid gain. In: Jensen JH, editor. Hearing aid fitting. 13th Danavox Symposium; 1988. p. 129–152.
  • Zeng FG, Galvin JJ 3rd. Amplitude mapping and phoneme recognition in cochlear implant listeners. Ear Hear. 1999 Feb;20(1):60–74.
  • Zeng FG, Grant G, Niparko J, et al. Speech dynamic range and its effect on cochlear implant performance. J Acoust Soc Am. 2002 Jan;111(1):377–386.
  • Skinner MW, Holden LK, Holden TA, et al. Speech recognition at simulated soft, conversational, and raised-to-loud vocal efforts by adults with cochlear implants. J Acoust Soc Am. 1997 Jun;101(6):3766–3782.
  • Galvin JJ 3rd, Fu QJ. Influence of stimulation rate and loudness growth on modulation detection and intensity discrimination in cochlear implant users. Hear Res. 2009 Apr;250(1–2):46–54.
  • Gescheider GA, Zwislocki JJ, Rasmussen A. Effects of stimulus duration on the amplitude difference limen for vibrotaction. J Acoust Soc Am. 1996 Oct;100(4 Pt 1):2312–2319.
  • Craig JC. Difference threshold for intensity of tactile stimuli. Percept Psychophys. 1972 Mar;11(2):150–152.
  • Harris JD. Loudness discrimination. J Speech Hear Disord. 1963:18–23.
  • Penner MJ, Leshowitz B, Cudahy E, et al. Intensity discrimination for pulsed sinusoids of various frequencies. Percept Psychophys. 1974 May;15(3):568–570.
  • Florentine M, Buus S, Mason CR. Level discrimination as a function of level for tones from 0.25 to 16 kHz. J Acoust Soc Am. 1987 May;81(5):1528–1541.
  • Weisenberger JM. Sensitivity to amplitude-modulated vibrotactile signals. J Acoust Soc Am. 1986 Dec;80(6):1707–1715.
  • Drullman R, Festen JM, Plomp R. Effect of temporal envelope smearing on speech reception. J Acoust Soc Am. 1994 Feb;95(2):1053–1064.
  • Van Tasell DJ, Soli SD, Kirby VM, et al. Speech waveform envelope cues for consonant recognition. J Acoust Soc Am. 1987 Oct;82(4):1152–1161.
  • Summers IR, Gratton DA. Choice of speech features for tactile presentation to the profoundly deaf. IEEE Trans Rehabil Eng. 1995 Mar;3(1):117–121.
  • Brown CA, Bacon SP. Low-frequency speech cues and simulated electric-acoustic hearing. J Acoust Soc Am. 2009 Mar;125(3):1658–1665.
  • Kong YY, Carlyon RP. Improved speech recognition in noise in simulated binaurally combined acoustic and electric stimulation. J Acoust Soc Am. 2007 Jun;121(6):3717–3727.
  • Lai YH, Tsao Y, Lu X, et al. Deep learning-based noise reduction approach to improve speech intelligibility for cochlear implant recipients. Ear Hear. 2018 Aug;39(4):795–809.
  • Goehring T, Bolner F, Monaghan JJ, et al. Speech enhancement based on neural networks improves speech intelligibility in noise for cochlear implant users. Hear Res. 2017 Feb;344:183–194.
  • Geffen G, Rosa V, Luciano M. Effects of preferred hand and sex on the perception of tactile simultaneity. J Clin Exp Neuropsychol. 2000 Apr;22(2):219–231.
  • Klumpp RG, Eady HR. Some measurements of interaural time difference thresholds. J Acoust Soc Am. 1956 Jun;28(5):859–860.
  • Francart T, Lenssen A, Wouters J. Enhancement of interaural level differences improves sound localization in bimodal hearing. J Acoust Soc Am. 2011 Nov;130(5):2817–2826.
  • Pirhosseinloo S, Kokkinakis K. An interaural magnification algorithm for enhancement of naturally-occurring level differences. In: Proceedings of Interspeech; San Francisco, CA, USA; 2016. p. 2558–2561.
  • Williges B, Jurgens T, Hu H, et al. Coherent coding of enhanced interaural cues improves sound localization in noise with bilateral cochlear implants. Trends Hear. 2018;22:1–18.
  • Gick B, Ikegami Y, Derrick D. The temporal window of audio-tactile integration in speech perception. J Acoust Soc Am. 2010 Nov;128(5):EL342–EL346.
  • Fujisaki W, Shimojo S, Kashino M, et al. Recalibration of audiovisual simultaneity. Nat Neurosci. 2004 Jul;7(7):773–778.
  • Keetels M, Vroomen J. Temporal recalibration to tactile-visual asynchronous stimuli. Neurosci Lett. 2008 Jan 10;430(2):130–134.
  • Navarra J, Soto-Faraco S, Spence C. Adaptation to audiotactile asynchrony. Neurosci Lett. 2007 Feb 8;413(1):72–76.
  • Van der Burg E, Alais D, Cass J. Rapid recalibration to audiovisual asynchrony. J Neurosci. 2013 Sep;33(37):14633–14637.
  • Wiggins IM, Seeber BU. Linking dynamic-range compression across the ears can improve speech intelligibility in spatially separated noise. J Acoust Soc Am. 2013 Feb;133(2):1004–1016.
  • Demain S, Metcalf CD, Merrett GV, et al. A narrative review on haptic devices: relating the physiology and psychophysical properties of the hand to devices for rehabilitation in central nervous system disorders. Disabil Rehabil Assist Technol. 2013 May;8(3):181–189.
  • Kaczmarek KA, Webster JG, Bach-y-Rita P, et al. Electrotactile and vibrotactile displays for sensory substitution systems. IEEE Trans Biomed Eng. 1991 Jan;38(1):1–16.
  • Dodgson GS, Brown BH, Freeston IL, et al. Electrical stimulation at the wrist as an aid for the profoundly deaf. Clin Phys Physiol Meas. 1983 Nov;4(4):403–416.
  • Summers IR. Signal processing strategies for single-channel systems. In: Summers IR, editor. Tactile aids for the hearing impaired. London: Whurr; 1992. p. 110–127.
  • Peurala SH, Pitkanen K, Sivenius J, et al. Cutaneous electrical stimulation may enhance sensorimotor recovery in chronic stroke. Clin Rehabil. 2002 Nov;16(7):709–716.
  • Saunders FA. Electrocutaneous displays. In: Geldard FA, editor. Cutaneous communication systems and devices. Austin, TX: The Psychonomic Society; 1973. p. 20–26.
  • Sparks DW, Ardell LA, Bourgeois M, et al. Investigating the MESA (multipoint electrotactile speech aid): the transmission of connected discourse. J Acoust Soc Am. 1979 Mar;65(3):810–815.
  • Brown BH, Stevens JC. Electrical stimulation of the skin. In: Summers IR, editor. Tactile aids for the hearing impaired. London: Whurr; 1992. p. 37–56.
  • Pezent E, Israr A, Samad M, et al. Tasbi: multisensory squeeze and vibrotactile wrist haptics for augmented and virtual reality. In: IEEE World Haptics Conference; Tokyo, Japan; 2019.
  • Tsetserukou D. HaptiHug: a novel haptic display for communication of hug over a distance. In: van Erp JBF, Bergmann Tiest WM, van der Helm FCT, editors. Haptics: generating and perceiving tangible sensations. EuroHaptics 2010. Lecture notes in computer science. Vol. 6191. Berlin, Heidelberg: Springer; 2010. p. 340–347.
  • Zheng Y, Morrell JB. Haptic actuator design parameters that influence affect and attention. In: IEEE Haptics Symposium; Vancouver, Canada; 2012.
  • Ciesla K, Wolak T, Lorens A, et al. Immediate improvement of speech-in-noise perception through multisensory stimulation via an auditory to tactile sensory substitution. Restor Neurol Neurosci. 2019;37(2):155–166.
  • Johansson RS, Vallbo AB. Tactile sensibility in the human hand: relative and absolute densities of four types of mechanoreceptive units in glabrous skin. J Physiol. 1979 Jan;286(1):283–300.
  • Summers IR, Whybrow JJ, Gratton DA, et al. Tactile information transfer: a comparison of two stimulation sites. J Acoust Soc Am. 2005 Oct;118(4):2527–2534.
  • Matscheko M, Ferscha A, Riener A, et al. Tactor placement in wrist worn wearables. In: Proceedings of the IEEE International Symposium on Wearable Computers (ISWC); 2010.
  • Carcedo MG, Chua SH, Perrault S, et al. HaptiColor: interpolating color information as haptic feedback to assist the colorblind. CHI Conference on Human Factors in Computing Systems. San Jose, California, USA: Association for Computing Machinery; 2016. p. 3572–3583.
  • Yamamoto S, Kitazawa S. Reversal of subjective temporal order due to arm crossing. Nat Neurosci. 2001 Jul;4(7):759–765.
  • Shore DI, Spry E, Spence C. Confusing the mind by crossing the hands. Cogn Brain Res. 2002 Jun;14(1):153–163.
  • Rahman MS, Yau JM. Somatosensory interactions reveal feature-dependent computations. J Neurophysiol. 2019 Jul 1;122(1):5–21.
  • Blamey PJ, Clark GM. A wearable multiple-electrode electrotactile speech processor for the profoundly deaf. J Acoust Soc Am. 1985 Apr;77(4):1619–1620.
  • Sparks DW, Kuhl PK, Edmonds AE, et al. Investigating the MESA (multipoint electrotactile speech aid): the transmission of segmental features of speech. J Acoust Soc Am. 1978 Jan;63(1):246–257.
  • Novich SD, Eagleman DM. Using space and time to encode vibrotactile information: toward an estimate of the skin’s achievable throughput. Exp Brain Res. 2015 Oct;233(10):2777–2788.
  • Wilska A. On the vibrational sensitivity in different regions of the body surface. Acta Physiol Scand. 1954 Jul 18;31(2–3):284–289.
  • Mancini F, Bauleo A, Cole J, et al. Whole-body mapping of spatial acuity for pain and touch. Ann Neurol. 2014 Jun;75(6):917–924.
  • Leveque JL, Dresler J, Ribot-Ciscar E, et al. Changes in tactile spatial discrimination and cutaneous coding properties by skin hydration in the elderly. J Invest Dermatol. 2000 Sep;115(3):454–458.