
The effect of an active transcutaneous bone conduction device on spatial release from masking

Pages 348-359 | Received 29 Oct 2018, Accepted 10 Dec 2019, Published online: 24 Dec 2019

Abstract

Objective: The aim was to quantify the effect of the experimental active transcutaneous Bone Conduction Implant (BCI) on spatial release from masking (SRM) in subjects with bilateral or unilateral conductive and mixed hearing loss.

Design: Measurements were performed in a sound booth with five loudspeakers at 0°, ±30° and ±150° azimuth. Target speech was presented frontally, and interfering speech from either the front (co-located) or surrounding (separated) loudspeakers. SRM was calculated as the difference between the separated and the co-located speech recognition threshold (SRT).

Study Sample: Twelve patients (aged 22–76 years) unilaterally implanted with the BCI were included.

Results: A positive SRM, reflecting a benefit of spatially separating interferers from target speech, was found for all subjects in the unaided condition, and for nine subjects (75%) in the aided condition. Aided SRM was lower than unaided SRM in nine of the subjects. There was no difference in SRM between patients with bilateral and unilateral hearing loss. In the aided condition, SRT improved only for patients with bilateral hearing loss.

Conclusions: The BCI fitted unilaterally in patients with bilateral or unilateral conductive/mixed hearing loss seems to reduce SRM. However, the data indicate that SRT is improved for patients with bilateral hearing loss and maintained for those with unilateral hearing loss.

Introduction

The BCI (Bone Conduction Implant) is an active transcutaneous bone conduction device (BCD), in which the transducer is implanted directly on the skull bone and the external audio-processor is magnetically retained over intact skin (Håkansson et al. Citation2010; Taghavi et al. Citation2015). Results from the ongoing clinical study have so far shown improved performance in basic tone and speech audiometry tests (Eeg-Olofsson et al. Citation2014; Reinfeldt, Håkansson, Taghavi, Freden Jansson, et al. Citation2015). The effect of the BCI during more complex tasks is currently under investigation, with focus on sound localisation in the study by Asp and Reinfeldt (Citation2018), and on speech recognition in a multi-talker cocktail party scenario in the current study.

The cocktail party situation (Cherry Citation1953; Cherry and Taylor Citation1954) is a well-known example of a challenging scenario for the auditory system, where the listener is surrounded by several talkers while trying to focus on only one of them. In normal-hearing listeners, the ability to resolve such a situation is remarkably efficient, and several contributing factors have been identified (Bronkhorst Citation2000; Bronkhorst and Plomp Citation1992; Noble, Byrne, and Ter-Horst Citation1997; Peissig and Kollmeier Citation1997). One key phenomenon contributing to efficient speech recognition in a cocktail party set-up is spatial release from masking (SRM): when target speech and maskers come from spatially separated sound sources, speech recognition improves compared to when all the sound sources are co-located. Three main contributions to SRM can be identified: monaural, binaural and cognitive.

The monaural contribution is mostly given by spectral cues originating from reflections at the pinna, head, and torso, and by the better ear effect, i.e. the difference in signal-to-noise ratio (SNR) between the two ears caused by the shadowing effect of the head. Given a frontal target and an asymmetrical distribution of the interferers in the azimuth plane, the SNR at one ear will be greater than at the other, allowing the listener to take advantage of the favourable side. The better ear effect can improve speech recognition in the presence of spatially separated maskers by up to 10 dB (Bronkhorst and Plomp Citation1988; Freyman et al. Citation1999). The auditory system is also capable of rapidly identifying the side with the more favourable listening condition (higher SNR) and of switching between the two ears accordingly. This ability gives rise to a better ear glimpsing strategy (Brungart and Simpson Citation2002), which can maximise speech recognition in asymmetric as well as symmetric target-masker configurations (Lingner et al. Citation2016).

Binaural cues are thought to be the main contributors in symmetric target-interferer configurations, where an SRM of 5–12 dB has been observed (Lingner et al. Citation2016; Marrone, Mason, and Kidd Citation2008b). More specifically, the key information is carried by the ITD (interaural time difference) and the ILD (interaural level difference), i.e. the differences in time and level of the signals reaching the two ears (Grothe, Pecka, and McAlpine Citation2010). The ITD is effective for frequencies below 1500 Hz (Litovsky Citation2015), while the ILD contributes mainly at higher frequencies.

Cognitive factors include higher-level processes in the auditory system, e.g. selective or switching attention. One decisive factor is the amount of informational masking, i.e. how similar the target and the interfering signals are to each other. Unlike energetic masking, which arises in the peripheral auditory system (basilar membrane and auditory nerve), informational masking takes place in the central auditory system, and therefore relates not only to objective characteristics of the stimuli, such as fundamental frequency, vocal tract size, and accent, but also to attentional phenomena, e.g. level of concentration, cognitive ability, and daily training (Swaminathan et al. Citation2015). The amount of informational masking affects several mechanisms, including the ability to use a glimpsing strategy (Brungart Citation2001; Lingner et al. Citation2016).

Several of the aforementioned mechanisms of spatial hearing may be missing or severely impaired in listeners with hearing loss. Hence, in listening situations with more than one speaker, hearing-impaired subjects have considerable problems with sound localisation and speech recognition compared to normal-hearing listeners (Akeroyd et al. Citation2014; Gatehouse and Noble Citation2004; Noble, Byrne, and Ter-Horst Citation1997; Zahorik and Rothpletz Citation2014). In early studies, hearing-impaired subjects were found to need up to 10 dB higher SNR to achieve the same speech recognition performance as the normal-hearing control group (Bronkhorst and Plomp Citation1992), accompanied by a lower SRM (Bronkhorst and Plomp Citation1992; Noble, Byrne, and Ter-Horst Citation1997; Peissig and Kollmeier Citation1997). Recent studies confirmed these findings, also showing an inverse relationship between the amount of achievable SRM and the severity of the hearing loss (Glyde et al. Citation2013; Marrone, Mason, and Kidd Citation2008a). A symmetrical distribution of sound sources around the listener and high informational masking were also found to decrease the performance of hearing-impaired listeners. More specifically, the use of the glimpsing strategy appears to be greatly limited in hearing-impaired listeners, possibly due to partial inaudibility of the target speech, among other factors (Best et al. Citation2017).

The impact of hearing devices on binaural abilities is still poorly understood, especially in patients fitted with BCDs. BCDs rely on the transmission of vibrations through the skull bone and surrounding tissues rather than via the conventional air conduction (AC) pathway. When a BC stimulus is applied at one location on the skull, both cochleae are stimulated almost simultaneously, reducing the interaural separation and decreasing the ability to extract binaural cues (Zeitooni, Maki-Torkko, and Stenfelt Citation2016). Due to this crosstalk between the two inner ears, a degradation of SRM compared to AC stimulation is expected under BC stimulation, as seen in Stenfelt and Zeitooni (Citation2013), where SRM for bilateral BC stimulation was approximately half of that for AC in normal-hearing individuals. Other studies showed decreased speech recognition in noise in patients with bilateral BCDs and cochlear implants for specific target-noise configurations (Litovsky et al. Citation2006; Priwin et al. Citation2004). However, the interaural differences depend on the placement of the BC transducer, with larger values for stimulation sites closer to the ipsilateral cochlea (Eeg-Olofsson, Stenfelt, and Granström Citation2011; Eeg-Olofsson et al. Citation2008; Stenfelt and Goode Citation2005). Given that the BCI is implanted in the mastoid part of the temporal bone, closer to the cochlea than e.g. the conventional bone-anchored hearing aid (BAHA) screw, a higher SRM may be possible with this device than with other BCDs.

Given the growing market for BCDs (Reinfeldt, Håkansson, Taghavi, and Eeg-Olofsson Citation2015), gaining more insight into the effect of BC rehabilitation on binaural ability is important for understanding and evaluating the overall rehabilitation process. The current study evaluates speech recognition in competing speech and SRM in patients unilaterally fitted with the BCI. More specifically, the following research questions are addressed:

  1. Do patients with a unilateral BCI show SRM in a complex cocktail party listening environment?

  2. Is the effect of the BCI different for patients with bilateral hearing loss as compared to unilateral hearing loss?

  3. What is the unaided SRM for patients with conductive or mixed hearing loss?

Materials and methods

To investigate speech recognition threshold (SRT) and SRM in patients with conductive or mixed hearing loss unilaterally fitted with a BCI device, four conditions were tested on each study subject:

  1. Unaided SRT in co-located target/interferers configuration;

  2. Aided SRT in co-located target/interferers configuration;

  3. Unaided SRT in separated target/interferers configuration;

  4. Aided SRT in separated target/interferers configuration.

Details about the measurements and data handling procedures are given below.

The study was approved by the regional ethical committee in Gothenburg, Sweden. Informed consent was signed by every participant prior to the measurements.

Study subjects

Twelve subjects with conductive or mixed hearing loss were included in the study. The subjects are part of the clinical trial of the BCI and had been fitted with the device for at least 8 months prior to these measurements. The age of the participants at measurement ranged between 22 and 76 years (mean 43, median 48 years). The BC thresholds ranged between 0 and 50 dB HL on the implanted side, and between −10 and 50 dB HL on the contralateral side. The AC thresholds were 35–110 dB HL on the implant side, and 0–80 dB HL contralaterally. Figure 1 shows the audiograms at the implant side (left panels) and non-implant side (right panels) of the participants, divided into two groups: bilateral hearing loss (top row) and unilateral hearing loss (bottom row). Patients assigned to the unilateral hearing loss group had, on the non-implant side, an AC PTA4 (pure tone average across 500, 1000, 2000 and 4000 Hz) equal to or better than 25 dB HL, and an air-bone gap ≤10 dB at each of the four PTA4 frequencies. These criteria allowed for mild to moderate hearing loss at single frequencies, as seen in Figure 1. Detailed demographic information and hearing characteristics of each patient are found in Table 1.
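To make the grouping criterion concrete, the following minimal sketch (in Python; the study's own analysis code is not published) computes PTA4 and applies the unilateral-group rule. The function names and threshold values are hypothetical, for illustration only.

```python
# Minimal sketch of the PTA4 computation and the unilateral/bilateral
# grouping criterion described above. All data below are hypothetical.

PTA4_FREQS = (500, 1000, 2000, 4000)  # Hz

def pta4(ac_thresholds):
    """Average air-conduction threshold (dB HL) over the PTA4 frequencies."""
    return sum(ac_thresholds[f] for f in PTA4_FREQS) / len(PTA4_FREQS)

def is_unilateral(ac_non_implant, bc_non_implant):
    """Non-implant side criterion: AC PTA4 <= 25 dB HL and an air-bone
    gap <= 10 dB at each of the four PTA4 frequencies."""
    abg_ok = all(ac_non_implant[f] - bc_non_implant[f] <= 10
                 for f in PTA4_FREQS)
    return pta4(ac_non_implant) <= 25 and abg_ok

# Example with made-up thresholds (dB HL) for the non-implant side:
ac = {500: 15, 1000: 20, 2000: 25, 4000: 30}
bc = {500: 10, 1000: 15, 2000: 20, 4000: 25}
print(pta4(ac), is_unilateral(ac, bc))  # 22.5 True
```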

Figure 1. Audiograms of the 12 study subjects: (A and B) Seven subjects with bilateral hearing loss; and (C and D) five subjects with unilateral hearing loss. Median values are indicated in bold line. Bi: bilateral; Uni: unilateral; AC: air conduction; BC: bone conduction.

Table 1. Information about the study subjects.

Hearing device

The subjects were tested with (aided condition) and without (unaided condition) their own BCI device, and no additional hearing device was used on the contralateral side in any of the subjects. The BCI is a novel BCD classified as active transcutaneous (Reinfeldt, Håkansson, Taghavi, and Eeg-Olofsson Citation2015), with an implanted transducer, and an externally worn audio-processor (Håkansson et al. Citation2010; Taghavi et al. Citation2015). The external part, comprising two omnidirectional microphones, is magnetically retained over intact skin, and the signal is transmitted to the implanted unit via an induction link (Taghavi et al. Citation2015). No automatic functions, such as noise control or automatic gain control, have been implemented in the device yet, and the fitting was generally done with linear amplification using the computer-based software ARKbase (ON Semiconductor, Arizona, USA). The two microphones have potential for directional use in future applications, but no such option has been implemented yet, and therefore in this study they are considered omnidirectional.

The BCI has been in a long-term clinical trial since 2012 (Eeg-Olofsson et al. Citation2014; Reinfeldt, Håkansson, Taghavi, Freden Jansson, et al. Citation2015), with the following inclusion criteria: (I) the pre-operative BC PTA4 on the implanted side should be 30 dB HL or lower, and (II) the difference between AC and BC thresholds in the implanted ear must be 20 dB or more. The hearing thresholds were measured for each patient prior to performing the speech tests for the present study, and it was confirmed that all the subjects still satisfied the initial inclusion criteria (see Table 1, columns “PTA4 at implanted side, BC” and “PTA4 at implanted side, AC”).

Measurement setup and procedure

The speech recognition measurements were performed in a sound booth of approximately 22 m3 (4.0 × 2.6 × 2.1 m), with reverberation time T30 = 0.09 s at 500 Hz and T30 = 0.07 s at 4000 Hz. The subjects were seated in the centre of the room, at a distance of 1.8 m from a loudspeaker (Bose 101 Music Monitor, Bose Corporation, Massachusetts, USA) at 0° azimuth. Four additional loudspeakers of the same model were placed in the corners of the room, two in the frontal and two in the rear part of the horizontal plane, at ±30° and ±150° azimuth, respectively (see Figure 2 for a schematic representation).

Figure 2. Schematic representation of the measurement set-up in the two different spatial conditions: (A) Target signal (S) and noise (N) from the same loudspeaker co-located in front of the test subject (0° azimuth); (B) signal from frontal loudspeaker, noise from four loudspeakers at ± 30° and ± 150° around the test subject.

The target speech (female voice) consisted of lists of ten five-word sentences developed by Hagerman (Citation1982, Citation1993), which are widely used for speech-in-noise testing in Swedish, the patients’ native language. Each sentence consisted of five grammatically correct words with low semantic predictability in a fixed syntax (e.g. “Jonas gav elva röda skålar”, in translation: “Jonas gave eleven red bowls”). The interfering speech was taken from a recording of a male speaker reading a Swedish novel. Four different sections of the recording were presented continuously, either from the four loudspeakers positioned at ±30° and ±150° azimuth at ear level, or co-located with the target signal (0° azimuth).

Two spatial configurations were tested on each subject, as shown in Figure 2: (A) target speech and four interferers from a frontal loudspeaker, and (B) target speech from a frontal loudspeaker, and four interferers from four separate loudspeakers. The first condition is referred to as co-located, the second as spatially separated. In both cases, the overall level of the interferers was 63 dB SPL Ceq (12 min recording time), measured at a position corresponding to the centre of the subject’s head. For subjects with bilateral hearing loss in the unaided condition, however, to accommodate different degrees of hearing loss, the overall level of the interferers was individually set at a comfortable level allowing the subjects to clearly hear both the target and the interfering speech at an SNR of 10 dB (see Table 1 for details). Each subject was asked if they could hear both signals during a short presentation of the test prior to the measurements.

The subjects were informed that the target speech was from a female talker and the interfering speech from male talkers, and were instructed to face the frontal loudspeaker. No further information was given about the spatial location of the sound sources. One training list and two target lists, played one after the other, were used in each condition, following the adaptive method described by Hagerman and Kinnefors (Citation1995). This method keeps the interferers’ level constant while the target speech level is varied, ultimately resulting in an SRT score, corresponding to the difference in sound level between the target and the interferers at which the subject correctly understands 40% of the presented words. The training started at an SNR of +10 dB, and the speech level was then decreased in steps of 5, 3, and 2 dB until the number of correctly repeated words in a sentence was ≤2. After the training, the target speech level was changed in steps of ±2, 1, or 0 dB depending on the number of correctly identified words. For a more detailed description of the procedure, including the task explanation for patients and the scoring method, see Asp, Jakobsson, and Berninger (Citation2018), where comparable measurements were performed. With the same setup and procedure, the SRT for normal-hearing subjects (age 19–60 years, n = 13) was measured to be −15.3 dB in the separated condition and −11.6 dB in the co-located one (Asp and Reinfeldt Citation2019).
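To make the tracking rule concrete, the sketch below simulates the adaptive procedure in Python. It is a sketch under stated assumptions: `present_sentence` is a toy stand-in for presenting a sentence and scoring a real listener, and the post-training step mapping `STEP_DB` is an illustrative reading of the ±2, 1, or 0 dB rule; Hagerman and Kinnefors (Citation1995) remains the authoritative reference for the exact rule.

```python
# Sketch of the adaptive SRT procedure: the interferer level is fixed and
# only the target level (i.e. the SNR) is varied between sentences.
import random

STEP_DB = {0: +2, 1: +1, 2: 0, 3: -1, 4: -2, 5: -2}  # assumed mapping

def present_sentence(snr_db):
    """Toy stand-in: play one five-word sentence at snr_db and return the
    number of correctly repeated words (0-5), simulated here."""
    p = min(max(0.5 + 0.08 * (snr_db + 8.0), 0.0), 1.0)  # toy psychometric curve
    return sum(random.random() < p for _ in range(5))

def run_training(start_snr=10.0, max_sentences=20):
    """Decrease the target level in 5, 3, then 2 dB steps until the
    listener repeats <= 2 of 5 words (one reading of the training rule)."""
    snr, steps = start_snr, (5.0, 3.0, 2.0)
    for i in range(max_sentences):
        if present_sentence(snr) <= 2:
            break
        snr -= steps[min(i, 2)]
    return snr

def run_list(snr, n_sentences=10):
    """Adapt the SNR toward the 40% point (2 of 5 words correct) and
    estimate the SRT as the mean SNR over the list."""
    snrs = []
    for _ in range(n_sentences):
        snr += STEP_DB[present_sentence(snr)]
        snrs.append(snr)
    return sum(snrs) / len(snrs)

print(run_list(run_training()))  # SRT estimate from one simulated list
```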

Each subject was tested in the aided and unaided conditions, and in the co-located and spatially separated configurations, giving a total of four test sessions. The order of measurements was randomised to minimise possible learning effects.

Measured parameters and data analysis

From each test, the main outcome was the SRT in dB. A negative SRT indicates that the subject can identify speech even when the level of the interferers is higher than that of the target.

The accuracy of each SRT measurement is assumed to be ±2.5 dB. This value corresponds to the 95% confidence interval for a single speech recognition measurement, determined through test-retest assessment in a previous study (Asp, Jakobsson, and Berninger Citation2018), where the same setup was used for analogous measurements.

The SRT was analysed at group level for the four measurement conditions to investigate whether the median SRTs differed from each other. The unaided and aided conditions were compared to investigate the effect of hearing aid use on speech recognition in competing speech (SRT aided benefit). The analysis was also performed for two separate groups based on the hearing loss characteristics, namely unilateral or bilateral.

SRM was estimated as the individual difference in SRT between the spatially separated and co-located conditions. For example, a subject with a co-located SRT of −5 dB and a separated SRT of −8 dB has an SRM of 3 dB. A positive SRM indicates that the spatial separation of interferers and target speech improves the listener’s speech recognition in noise. SRM was compared between the aided and unaided conditions.

All the pairwise comparisons were statistically evaluated with the Wilcoxon signed-rank test for matched pairs, and the Holm and Hochberg corrections were applied in order to maintain an overall 95% confidence level.
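As a sketch of this testing pipeline (on hypothetical SRT vectors, not the study data), the signed-rank test and the Holm correction are available in SciPy and statsmodels; the Hochberg step-up variant corresponds to method="simes-hochberg" in the same function.

```python
# Pairwise Wilcoxon signed-rank tests with Holm correction, applied to
# hypothetical SRT data (dB) for n = 12 subjects.
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
srt = {
    "unaided_sep": rng.normal(-6.8, 3.0, 12),  # placeholder data
    "unaided_col": rng.normal(-4.9, 3.0, 12),
    "aided_sep": rng.normal(-8.7, 2.0, 12),
    "aided_col": rng.normal(-7.8, 2.0, 12),
}

comparisons = {
    "unaided SRM (col vs sep)": (srt["unaided_col"], srt["unaided_sep"]),
    "aided SRM (col vs sep)": (srt["aided_col"], srt["aided_sep"]),
    "aided benefit (separated)": (srt["unaided_sep"], srt["aided_sep"]),
    "aided benefit (co-located)": (srt["unaided_col"], srt["aided_col"]),
}
p_raw = [wilcoxon(a, b).pvalue for a, b in comparisons.values()]
reject, p_adj, _, _ = multipletests(p_raw, alpha=0.05, method="holm")
for name, p, rej in zip(comparisons, p_adj, reject):
    print(f"{name}: adjusted p = {p:.3f}, significant = {rej}")
```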

A post-hoc analysis was performed to test for a correlation between SRM and the patients’ low-frequency thresholds (250, 500 and 1000 Hz), as an estimate of the ITD contribution, and between SRM and the high-frequency thresholds (1500, 2000, 3000, 4000, 6000 and 8000 Hz), as an estimate of the ILD contribution. The correlation was evaluated by fitting a linear regression model with the average AC thresholds at the non-implanted side as the independent variable and SRM as the dependent variable. The same investigation was carried out between hearing thresholds and SRT. PTA4 was also tested as an independent variable, combining low- and high-frequency characteristics. The same models were fitted with the SRT in the co-located and separated conditions as the dependent variable. The quality of fit of the linear models was evaluated through the coefficient of determination (the R-squared parameter).
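A minimal sketch of one such fit, assuming hypothetical threshold and SRT vectors, is shown below; `scipy.stats.linregress` returns the slope, the correlation coefficient (whose square is the coefficient of determination), and the p-value for the slope.

```python
# Linear regression of aided SRT (dB) on the average AC threshold (dB HL)
# at the non-implanted side. Data values are hypothetical placeholders.
import numpy as np
from scipy.stats import linregress

low_freq_avg = np.array([5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60])
srt_aided_sep = np.array([-14, -13, -12, -12, -11, -10, -9, -9, -8, -7, -7, -6])

fit = linregress(low_freq_avg, srt_aided_sep)
print(f"slope = {fit.slope:.3f} dB SRT per dB HL")
print(f"R-squared = {fit.rvalue ** 2:.2f}")  # quality of fit
print(f"p-value (slope) = {fit.pvalue:.2e}")
```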

All data handling and statistical analysis were performed in MATLAB R2016a (MathWorks, Inc., Natick, MA, USA).

Results

The SRM, quantifying the change in SRT due to the spatial separation of target and interferers, is illustrated in Figure 3 for the unaided and aided conditions.

Figure 3. Spatial Release from Masking (SRM) measured in unaided and aided condition: (A) histogram, unaided condition, (B) histogram, aided condition, (C) Unaided and aided SRM, with data from the same subject connected by a dotted line. Subjects with unilateral and bilateral hearing loss are distinguishable by the different markers, dot and triangle respectively. One aided data point (−4.5 dB) has been omitted in order to have a more suitable scale on the y-axis.

As seen in Figure 3(A), all the study subjects had a positive SRM in the unaided condition. In the aided condition, nine of the twelve subjects (75%) showed a positive SRM (Figure 3(B)). Figure 3(C) shows how the SRM changed from unaided to aided within a subject: 9 of 12 subjects (75%) showed a decrease in SRM, whereas three subjects (one with unilateral and two with bilateral hearing loss) showed an increase. Figure 3(C) also illustrates that the direction of change from unaided to aided SRM did not seem to depend on the magnitude of the unaided SRM.

In both the aided and unaided conditions, the SRM values ranged between −1 and +5 dB, with one exception: one patient showed a remarkably low SRM of −4.5 dB when measured with the device on. This data point is not shown in Figures 3 and 4, to allow a clearer representation of the other data points, but it was not excluded from the statistical analysis.

Figure 4. Boxplot of SRT (co-located and separated condition) and SRM, shown separately for patients with bilateral and unilateral hearing loss. Mean values are indicated with a green asterisk, medians with a red horizontal line. Individual data points are plotted in grey colour for patients with unilateral and bilateral hearing loss in circles and triangles, respectively. The y-axis in the right panel is limited between −1 and 5 dB, and one bilateral aided SRM data point (−4.5 dB) is therefore not shown. SRT: Speech Recognition Threshold; SRM: Spatial Release from Masking; uni: unilateral hearing loss; bi: bilateral hearing loss.

The Wilcoxon signed-rank test revealed that SRM in the unaided condition (median: 1.95 dB) was significantly different from zero, whereas SRM in the aided condition (median: 1.15 dB) was not (Table 2, rows 1 and 4). The difference between aided and unaided SRM was not significant (Table 2, row 2). Additional details about all the pairwise comparisons performed can be found in Table 2.

Table 2. Results of pairwise comparisons between aided and unaided conditions, and between interferer placement conditions.

In Figure 4, the SRT and SRM results are plotted separately for unilaterally and bilaterally impaired patients. The leftmost and middle panels of Figure 4 reveal that patients with unilateral hearing loss achieved a lower SRT in both the co-located and separated listening conditions, indicating a better tolerance of interfering speech than the bilateral group, and probably also better audibility of the target signal, despite the increased target presentation level used for patients with bilateral hearing loss. No significant differences in SRM were found between the two groups, as shown in the rightmost panel of Figure 4.

The SRT measured in the unaided condition shows high variability among patients, with values ranging between −12.8 and 3.6 dB in the co-located condition, and between −15 and 2.8 dB in the separated condition. Results obtained in the BCI-aided condition show decreased interindividual variability compared to the unaided condition, in both the separated (−14.2 to −4.1 dB) and co-located (−13.1 to −4.8 dB) conditions. Individual SRT values for each patient and condition are found in Table 3.

Table 3. Speech Recognition Thresholds (SRT) obtained from measurements in co-located and separated configuration for all the study subjects.

The mean and median SRT values for the aided condition were approximately 2 dB lower than in the unaided condition for the same sound-source configuration: in the co-located case, the mean (median) threshold was reduced from −4.9 (−5.5) dB to −7.8 (−6.7) dB, and in the separated case, from −6.8 (−7.5) dB to −8.7 (−8.3) dB. This indicates a benefit from the hearing device in terms of an improved ability to understand speech in noisy environments. However, these differences were not statistically significant.

Post-hoc analyses

Warble-tone thresholds in the aided condition, measured in sound field, ranged from 13 to 34 dB HL six months after the initial fitting of the audio processor (Reinfeldt, Håkansson, Taghavi, Freden Jansson, et al. Citation2015). Hence, good audibility of both the target signal and the competing speech on the implant side can be assumed. In the non-implanted ear, hearing thresholds varied greatly across the patient group (see Figure 1). To test whether these thresholds were related to aided performance, post-hoc linear regression analyses of the SRTs as a function of the low-frequency, high-frequency, and PTA4 AC threshold averages in the non-implanted ear were performed for the co-located SRT, the separated SRT, and the SRM results. Low- and high-frequency averages were used to separate the impact of ITD and ILD cues. In Figure 5, the results are shown for the co-located and separated aided SRT (top and bottom row, respectively). Based on the R-squared parameter, the best fit is obtained when the low-frequency hearing threshold is used as the independent variable (Figure 5, left column), indicating that low-frequency audibility predicts the SRT more accurately than high-frequency audibility in both spatial configurations (R-squared 0.54 versus 0.33 in the co-located condition, and 0.61 versus 0.39 in the separated case). The estimated slopes are between 0.06 and 0.1, suggesting a gain of approximately 1 dB in SRT per 10 dB improvement of the hearing thresholds in the non-implanted ear. All the estimated slopes were statistically significant at the 95% confidence level.

Figure 5. Post-hoc linear regression analyses of the aided SRT as a function of average hearing thresholds for low frequencies (LF; left column) high frequencies (HF; middle column), and pure tone average (PTA4; right column) in co-located and separated target-interferer spatial configurations. SRT: Speech Recognition Threshold.

A linear regression analysis with an analogous method was conducted on the SRM results. The fitted linear models all had rather poor quality of fit (R-squared of at most 0.18), suggesting that the AC hearing thresholds in the non-implant ear are not suitable predictors of the SRM under these specific experimental conditions.

Figure 6 shows a regression analysis of the SRT in the separated condition as a function of PTA4 level, performed separately for unilaterally and bilaterally impaired patients. The analysis shows that the contralateral hearing level affects the achieved SRT in both the aided and unaided conditions for patients with unilateral hearing loss (Figure 6, left panels). For patients with bilateral hearing loss, thus with much more impaired hearing on the contralateral side, the unaided results seem to improve when the PTA4 is lower, whereas in the aided condition the same patients seem to achieve a similar SRT regardless of their contralateral PTA4 (Figure 6, top-right panel, slope = 0.056). The analysis of SRT when target speech and interfering speech were co-located gave results similar (aided: slope = 0.18 and −0.053 for unilaterally and bilaterally impaired patients, respectively; unaided: slope = 0.15 and 0.26) to the separated condition shown in Figure 6.

Figure 6. Correlation between separated SRT and PTA4 for the non-implanted ear for patients with unilateral and bilateral hearing loss separately. PTA4 is the average of tone thresholds at 500, 1000, 2000, and 4000 Hz. SRT is the speech recognition threshold when 40% of the target speech is correctly understood.

Results from Figure 6 suggest that patients with a higher degree of hearing loss on the contralateral side (thus bilateral hearing loss) benefit the most from using the BCI when listening to speech in competing speech. This finding is confirmed by the results shown in Figure 7, where the SRT aided benefit is correlated with the PTA4 at the contralateral side. The linear fits for patients with unilateral hearing loss (Figure 7, left panels) show a nearly flat slope (0.038 and −0.027 for the separated and co-located conditions, respectively) with a very poor quality of fit (R-squared = 0.11 and 0.039). These fit parameters indicate that, when listening in noisy environments, patients with normal or near-normal hearing at the contralateral side are affected by the use of their device in a way that is not linearly related to their contralateral PTA4. On the other hand, the right-hand plots, corresponding to bilaterally impaired patients, show a clearer correlation between residual hearing and SRT aided benefit, where the patients with less residual hearing (higher PTA4) are the ones who benefit the most from their BCD (slope = 0.26 and 0.31 for the separated and co-located conditions, respectively). This observation is in line with the top-right plot of Figure 6, where the bilaterally impaired patients all reached a similar SRT when using the device, regardless of their unaided contralateral PTA4.

Figure 7. Correlation between SRT aided benefit and PTA4 at the contralateral ear. PTA4 is the average of tone thresholds at 500, 1000, 2000, and 4000 Hz. SRT aided benefit is calculated as the difference between aided and unaided SRT in the same spatial configuration. A positive benefit corresponds to a lower SRT in aided condition compared to the unaided one. Patients are divided in two groups, with unilateral and bilateral hearing loss. SRT: Speech Recognition Threshold.

Discussion

Spatial release from masking

In this study, individuals with conductive or mixed hearing loss were tested in a cocktail party task with symmetrically placed and co-located interferers to investigate SRM. The SRM was calculated for patients fitted unilaterally with the BCI device in both the unaided and aided conditions, resulting in a mean (median) SRM of 2.0 (1.95) dB and 0.86 (1.15) dB, respectively. For normal-hearing subjects under the same experimental conditions and with the same stimuli, the SRM was previously measured to be 3.7 dB (Asp and Reinfeldt Citation2019).

In the aided condition, patients do not seem to be able to take significant advantage of the spatial separation of the sound sources, i.e. the SRM is not significantly different from zero. However, it should be noted that the data analysis was heavily affected by one single SRM value of −4.5 dB. This low performance might be explained by the fact that the subject who obtained it was less familiar with using the device than the other participants. The data point was classified as an outlier according to the boxplot test, falling below the lower whisker (75th percentile: 1.95, 25th percentile: 0.1, lower whisker: −2.675), and also according to the Hampel identifier (Davies and Gather Citation1993). Excluding this subject from the statistical analysis would lead to a p-value of 0.0078 from the Wilcoxon test, and the aided SRM would then be deemed statistically different from zero even after the Holm/Hochberg correction.
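For reference, both outlier rules are straightforward to reproduce. In the sketch below the SRM vector is hypothetical apart from the −4.5 dB point, so the computed quartiles differ slightly from those reported above.

```python
# The two outlier checks applied to the aided SRM values: the boxplot
# (Tukey) rule and the Hampel identifier (median +/- 3 scaled MADs).
import numpy as np

srm_aided = np.array([-4.5, -1.0, 0.1, 0.5, 1.0, 1.1,
                      1.2, 1.5, 1.9, 1.95, 2.5, 3.0])  # dB, hypothetical

q1, q3 = np.percentile(srm_aided, [25, 75])
lower_whisker = q1 - 1.5 * (q3 - q1)               # Tukey boxplot rule
tukey_outliers = srm_aided[srm_aided < lower_whisker]

med = np.median(srm_aided)
mad = 1.4826 * np.median(np.abs(srm_aided - med))  # MAD scaled to sigma
hampel_outliers = srm_aided[np.abs(srm_aided - med) > 3 * mad]

print(tukey_outliers, hampel_outliers)  # -4.5 is flagged by both rules
```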

On the other hand, low SRM values in the aided condition are expected when taking into account the crosstalk introduced by BC stimulation, although different BCDs may lead to different amounts of crosstalk. One way of quantifying the cross-stimulation is to measure the transcranial attenuation (TA), i.e. the extent to which the signal reaching the contralateral cochlea is attenuated with respect to the one at the ipsilateral side. In the context of binaural hearing and SRM, a higher TA would theoretically be preferable, as it would allow more distinct binaural cues. Two main methods have been used in the literature to measure TA: (1) calculating TA as the difference in hearing thresholds in unilaterally deaf subjects stimulated ipsi- and contralaterally by BC (Nolan and Lyon Citation1981; Snyder Citation1973; Stenfelt Citation2012), and (2) comparing the acceleration at the two cochlear promontories (Eeg-Olofsson et al. Citation2008; Håkansson et al. Citation2010; Stenfelt and Goode Citation2005). As an overall trend, TA was found to be frequency dependent, and generally higher in the high-frequency range and when the stimulation is applied closer to the ipsilateral cochlea. An advantage of the BCI over e.g. a BAHA in the context of cross-stimulation is that the BCI is implanted in the mastoid bone, and should therefore give greater TA than an implant at the BAHA position (Eeg-Olofsson, Stenfelt, and Granström Citation2011; Stenfelt Citation2012). Contradicting this hypothesis, a study by Zeitooni, Maki-Torkko, and Stenfelt (Citation2016) showed similar results in terms of SRM for bilateral BC stimulation applied at the mastoid and at the BAHA position, in both cases reaching approximately half of the benefit (in dB) obtained with AC stimulation. However, that study was performed on normal-hearing subjects, with bilateral stimulation and a single interferer, making the measurement conditions substantially different from the current study.

Several studies have shown that bilaterally hearing-impaired patients benefit from being fitted with BAHAs on both sides with regard to sound localisation and speech recognition in noise (Colquitt et al. Citation2011; Priwin et al. Citation2004; Stenfelt and Zeitooni Citation2013; Zeitooni, Maki-Torkko, and Stenfelt Citation2016). Possibly, the SRM would increase if a second BCI were used, compared to the current measurements, where all the subjects were fitted on one side only. This remains speculative, however, as no bilateral BCI implantation has been performed yet. Additionally, changes in the device settings may also improve SRM in aided conditions.

Speech recognition threshold

The SRT measured in this study was between −15 and 3.6 dB in the unaided condition, and between −14.2 and −4.1 dB in the aided condition, with a mean difference of approximately 2 dB between the unaided and aided conditions in the same spatial configuration (see Figure 8). This aided benefit is in line with the results presented by Berninger and Karlsson (Citation1999), where an analogous set-up was used to test speech recognition in competing speech in the separated configuration, on normal-hearing subjects and on subjects fitted unilaterally or bilaterally with conventional AC hearing aids. The average improvement in SRT from the unaided to the aided condition for the hearing-impaired population in that study was approximately 2.5 dB.

Figure 8. Speech Recognition Thresholds (SRT) obtained in the four measurement conditions: target and interferer from the same loudspeaker (coloc) or from spatially separated ones (sep), without (unaided) and with the BCI device (aided). The boxplot shows how the data is distributed, with mean and median values marked by a black star and a red horizontal bar, respectively. Individual data points are plotted as orange circles and green triangles for patients with unilateral and bilateral hearing loss, respectively.

Figure 8 highlights that the majority of patients achieved an SRT below zero in all conditions, indicating that they were able to understand 40% of the target signal when the level of the interferers was higher than that of the target itself. The best results were obtained in the separated aided condition, which in turn corresponds to the most realistic scenario for a cocktail party situation. However, the benefit appears to originate from the use of the device itself rather than from the spatial separation of the sound sources, as indicated by the small, and in three cases negative, SRM (Figure 3). In the aided condition, patients achieved a better SRT in both the co-located and separated configurations, with substantially decreased between-subject variability compared to the unaided condition. This suggests that patients with rather limited performance in the unaided case are the ones experiencing the greatest benefit from the device.

Figure 9 shows the change in SRT from the unaided to the aided condition, with the co-located and separated configurations on the y- and x-axes, respectively. The unaided and aided conditions for the same patient are connected by an arrow, with the head pointing at the aided condition. The length of the arrow illustrates the overall change when the device is used, with the horizontal component representing the improvement in the separated configuration and the vertical component the improvement in the co-located configuration. The figure reveals a difference in length and direction between the arrows for unilaterally impaired patients (orange circles) and bilaterally impaired ones (green triangles). Once again, this indicates that, with normal or near-normal hearing thresholds at the non-implanted side, the BCI has little influence on SRT. However, the figure should be interpreted with care: the unaided performance could not be directly correlated to the SRT aided benefit, since the SRT was measured at different sound levels in the unaided and aided conditions for the bilaterally impaired patients, making inter-subject comparisons hard to interpret.

Figure 9. Speech Recognition Threshold (SRT) in co-located and separated configuration. The origins of the arrows represent the unaided results; the heads point at the aided results. Values for normal hearing subjects estimated from a previous study (Asp and Reinfeldt Citation2019) are marked with an asterisk. SRTs from patients with unilateral and bilateral hearing loss are marked with orange circles and green triangles, respectively. SRM (Spatial Release from Masking) is positive above the unity line (SRT separated = SRT co-located, dotted line) and negative below.

On the other hand, the unaided SRT can be hypothesised to be related to the sound-field tone thresholds in the unaided condition, and therefore the SRT aided benefit was analysed as a function of the unaided PTA4 in the non-implant ear. The results, presented in Figure 7, confirm the hypothesis that more severely impaired patients benefit the most from using the device, with a positive correlation between SRT aided benefit and PTA4: the higher the PTA4 in the non-implant ear (more severe hearing loss), the greater the benefit. This observation is valid for subjects with bilateral hearing loss, while those with unilateral impairment experience essentially neither a benefit nor a deterioration of the SRT. This result indicates that the BCI did not negatively affect their SRT in competing speech, despite the fact that the two interfering signals on the aided side should become more audible in the aided condition, resulting in higher informational masking. Instead, the aided SRT was found to be closely related to the contralateral hearing thresholds, as shown in Figure 6 (top row, left-hand fit). These results have clinical implications for the management of patients with bilateral conductive hearing loss and a unilateral BCI: if the PTA4 on the non-implant side is in the range of 40 to 50 dB HL, performance may not increase after bilateral implantation (see Figure 7).

Study limitations

The utilised listening set-up consisted of a frontal target speech and four symmetrically distributed interferers. For investigating SRM, this set-up is less favourable than an asymmetric distribution, which would give more room for contributions from, e.g., the head shadow effect. As an example, Bronkhorst and Plomp (Citation1992) report an SRM of 5 dB when two speech-modulated noise interferers are placed symmetrically with respect to the listener, while in an asymmetrical configuration the value reached up to approximately 7 dB according to a study by Hawley, Litovsky, and Culling (Citation2004). With four to six interferers, Bronkhorst and Plomp (Citation1992) showed an increase in SRM from less than 2 dB to approximately 3 dB when changing to an asymmetrical interferer configuration. Furthermore, the head shadow component is further reduced when the number of interferers increases, as adding sound sources significantly decreases the difference between the better and worse ear (Bronkhorst and Plomp Citation1992). In the current study, a better-ear glimpsing strategy was hypothetically possible in the separated condition, enabled by the instantaneously asymmetric masking level provided by the interfering signals, whose natural pauses occur at different moments. However, in hearing-impaired listeners the ability to use a glimpsing strategy is reduced (Best et al. Citation2017), constituting yet another factor limiting the SRM.

Another aspect that might have contributed to lowering the SRM is the subjects’ level of attention (for a review, see e.g. Bronkhorst (Citation2015)) and their pre-knowledge about the target signal. In the present study, the subjects had information about the target voice (female speaker) but not about its location. Previous studies (Ericson, Brungart, and Simpson Citation2004; Ihlefeld, Sarwar, and Shinn‐Cunningham Citation2006; Kidd et al. Citation2005) showed that knowing the target location results in higher scores than knowing its voice, and that the combination of both cues leads to the best performance. Target uncertainty is one of the factors affecting performance in tasks requiring so-called divided attention (Drullman and Bronkhorst Citation2000), as opposed to the purely selective listening abilities addressed in regular speech-in-noise tests. Although the subjects may have become aware of the target voice location after the training session, they did not receive any confirmation, keeping a level of uncertainty that may have played a role during the test.

The age of participants has also been suggested as an influencing factor in previous studies investigating speech recognition and SRM (Dubno, Ahlstrom, and Horwitz Citation2008; Kathleen Pichora-Fuller, Schneider, and Daneman Citation1995; Marrone, Mason, and Kidd Citation2008b), especially with symmetric masker placement, given that this set-up suppresses the simple better ear effect, giving more room to binaural and higher-level processes. However, these studies were not able to unambiguously separate the effect of age from hearing status, and the overall conclusions tend to relate age to a general difficulty in ignoring irrelevant stimuli seen in hearing-impaired subjects, probably, but not explicitly, worsened by increased age (Marrone, Mason, and Kidd Citation2008b). More conclusive results were reported by Fullgrabe, Moore, and Stone (Citation2014), who demonstrated, by comparing the performance of audiometrically matched old and young listeners, that cognition and sensitivity to temporal fine structure were the best predictors of speech recognition in noise. In that study, however, SRM was not affected by age. In the current study, a post-hoc linear regression of the SRM data against the age of participants resulted in a very poor fit, indicating no apparent correlation. However, due to the small sample size and inhomogeneous composition of the subjects, the age effect could not be investigated with high statistical power and therefore cannot be excluded as an influencing factor.

The low number of participants (N = 12) is one of the main limitations of the present study, and could not be improved given the number of patients implanted with the BCI device so far. Another limitation, as mentioned above, is the heterogeneity of the patient group, which includes different aetiologies and degrees of hearing loss. One consequence was the need for individual presentation levels in the unaided condition for the bilaterally impaired patients, which was deemed necessary to ensure that the participants could hear the signals used. This approach may have resulted in the measured SRT reflecting the recognition threshold of the target signal itself, rather than the SRT in competing speech. However, this is a remote possibility, given that each subject was exposed to the signals prior to the measurements and was asked whether they could hear both the noise and the target speech. The individual presentation level is still problematic when aided and unaided SRT are compared in the bilateral hearing loss group, because the audibility (in both ears) differs between the two conditions for a reason other than the BCI. However, the primary aim was to assess SRM, which is a relative measure within each condition (aided and unaided), where the separated and co-located configurations had the same presentation level.

Conclusions

The ability to recognise speech in co-located and symmetrically separated competing speech was studied in patients with unilateral or bilateral conductive or mixed hearing loss fitted with the active transcutaneous device BCI in a cocktail party setup.

Overall, the aided SRM amounted to a median value of 1.15 dB, not statistically different from zero. A trend towards lower SRM in the aided condition was seen in 75% of the patients. However, SRM was still positive in nine of twelve subjects, indicating some degree of SRM under unilateral direct-drive BC stimulation in the utilised measurement setup.

No difference was observed in aided SRM between patients with unilateral hearing loss and bilateral hearing loss (median 1.1 dB and 1.4 dB, respectively). For patients with bilateral hearing loss, the SRT aided benefit was related to the degree of their hearing loss, i.e. larger for severe hearing loss and smaller for moderate hearing loss.

Acknowledgements

The authors would like to thank Maria Drott, Kerstin Henricson and Ann-Charlotte Persson for assistance in performing the audiological measurements, and the study subjects for their participation.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This study was supported by the Hasselblad Foundation and the Swedish Research Council (VR) with number 621-2013-6027.

References

  • Akeroyd, M. A., F. H. Guy, D. L. Harrison, and S. L. Suller. 2014. “A Factor Analysis of the SSQ (Speech, Spatial, and Qualities of Hearing Scale).” International Journal of Audiology 53 (2): 101–114. doi:10.3109/14992027.2013.824115.
  • Asp, F., A.-M. Jakobsson, and E. Berninger. 2018. “The Effect of Simulated Unilateral Hearing Loss on Horizontal Sound Localization Accuracy and Recognition of Speech in Spatially Separate Competing Speech.” Hearing Research 357: 54–63. doi:10.1016/j.heares.2017.11.008.
  • Asp, F., and S. Reinfeldt. 2018. “Horizontal Sound Localisation Accuracy in Individuals with Conductive Hearing Loss: Effect of the Bone Conduction Implant.” International Journal of Audiology 57 (9): 657–664. doi:10.1080/14992027.2018.1470337.
  • Asp, F., and S. Reinfeldt. 2019. “Effects of Simulated and Profound Unilateral Sensorineural Hearing Loss on Recognition of Speech in Competing Speech.” Ear and Hearing. doi:10.1097/AUD.0000000000000764.
  • Berninger, E., and K. K. Karlsson. 1999. “Clinical Study of Widex Senso on First-Time Hearing Aid Users.” Scandinavian Audiology 28 (2): 117–125. doi:10.1080/010503999424842.
  • Best, V., C. R. Mason, J. Swaminathan, E. Roverud, and G. Kidd. 2017. “Use of a Glimpsing Model to Understand the Performance of Listeners with and without Hearing Loss in Spatialized Speech Mixtures.” The Journal of the Acoustical Society of America 141 (1): 81–91. doi:10.1121/1.4973620.
  • Bronkhorst, A. W. 2000. “The Cocktail Party Phenomenon: A Review of Research on Speech Intelligibility in Multiple-Talker Conditions.” Acta Acustica United with Acustica 86: 117–128. http://www.ingentaconnect.com/
  • Bronkhorst, A. W. 2015. “The Cocktail-Party Problem Revisited: Early Processing and Selection of Multi-Talker Speech.” Attention, Perception, & Psychophysics 77 (5): 1465–1487. doi:10.3758/s13414-015-0882-9.
  • Bronkhorst, A. W., and R. Plomp. 1988. “The Effect of Head-Induced Interaural Time and Level Differences on Speech Intelligibility in Noise.” The Journal of the Acoustical Society of America 83 (4): 1508–1516. doi:10.1121/1.395906.
  • Bronkhorst, A. W., and R. Plomp. 1992. “Effect of Multiple Speechlike Maskers on Binaural Speech Recognition in Normal and Impaired Hearing.” The Journal of the Acoustical Society of America 92 (6): 3132–3139. doi:10.1121/1.404209.
  • Brungart, D. S. 2001. “Informational and Energetic Masking Effects in the Perception of Two Simultaneous Talkers.” The Journal of the Acoustical Society of America 109 (3): 1101–1109. doi:10.1121/1.1345696.
  • Brungart, D. S., and B. D. Simpson. 2002. “The Effects of Spatial Separation in Distance on the Informational and Energetic Masking of a Nearby Speech Signal.” The Journal of the Acoustical Society of America 112 (2): 664–676. doi:10.1121/1.1490592.
  • Cherry, E. C. 1953. “Some Experiments on the Recognition of Speech, with One and with Two Ears.” The Journal of the Acoustical Society of America 25 (5): 975–979. doi:10.1121/1.1907229.
  • Cherry, E. C., and W. K. Taylor. 1954. “Some Further Experiments upon the Recognition of Speech, with One and with Two Ears.” The Journal of the Acoustical Society of America 26 (4): 554–559. doi:10.1121/1.1907373.
  • Colquitt, J. L., E. Loveman, D. M. Baguley, T. E. Mitchell, P. Z. Sheehan, P. Harris, D. W. Proops, J. Jones, A. J. Clegg, and K. Welch. 2011. “Bone-Anchored Hearing Aids for People with Bilateral Hearing Impairment: A Systematic Review.” Clinical Otolaryngology 36 (5): 419–441. doi:10.1111/j.1749-4486.2011.02376.x.
  • Davies, L., and U. Gather. 1993. “The Identification of Multiple Outliers.” Journal of the American Statistical Association 88 (423): 782–792. doi:10.2307/2290763.
  • Drullman, R., and A. W. Bronkhorst. 2000. “Multichannel Speech Intelligibility and Talker Recognition Using Monaural, Binaural, and Three-Dimensional Auditory Presentation.” The Journal of the Acoustical Society of America 107 (4): 2224–2235. doi:10.1121/1.428503.
  • Dubno, J. R., J. B. Ahlstrom, and A. R. Horwitz. 2008. “Binaural Advantage for Younger and Older Adults with Normal Hearing.” Journal of Speech, Language, and Hearing Research 51 (2): 539–556. doi:10.1044/1092-4388(2008/039).
  • Eeg-Olofsson, M., B. Håkansson, S. Reinfeldt, H. Taghavi, H. Lund, K. J. Jansson, E. Håkansson, and J. Stalfors. 2014. “The Bone Conduction Implant–First Implantation, Surgical and Audiologic Aspects.” Otology & Neurotology 35 (4): 679–685. doi:10.1097/MAO.0000000000000203
  • Eeg-Olofsson, M., S. Stenfelt, and G. Granström. 2011. “Implications for Contralateral Bone-Conducted Transmission as Measured by Cochlear Vibrations.” Otology & Neurotology 32 (2): 192–198. doi:10.1097/MAO.0b013e3182009f16.
  • Eeg-Olofsson, M., S. Stenfelt, A. Tjellström, and G. Granström. 2008. “Transmission of Bone-Conducted Sound in the Human Skull Measured by Cochlear Vibrations.” International Journal of Audiology 47 (12): 761–769. doi:10.1080/14992020802311216.
  • Ericson, M. A., D. S. Brungart, and B. D. Simpson. 2004. “Factors That Influence Intelligibility in Multitalker Speech Displays.” The International Journal of Aviation Psychology 14 (3): 313–334. doi:10.1207/s15327108ijap1403_6.
  • Freyman, R. L., K. S. Helfer, D. D. McCall, and R. K. Clifton. 1999. “The Role of Perceived Spatial Separation in the Unmasking of Speech.” The Journal of the Acoustical Society of America 106 (6): 3578–3588. doi:10.1121/1.428211.
  • Fullgrabe, C., B. C. Moore, and M. A. Stone. 2014. “Age-Group Differences in Speech Identification despite Matched Audiometrically Normal Hearing: Contributions from Auditory Temporal Processing and Cognition.” Frontiers in Aging Neuroscience 6: 347.
  • Gatehouse, S., and W. Noble. 2004. “The Speech, Spatial and Qualities of Hearing Scale (SSQ).” International Journal of Audiology 43 (2): 85–99. doi:10.1080/14992020400050014.
  • Glyde, H., S. Cameron, H. Dillon, L. Hickson, and M. Seeto. 2013. “The Effects of Hearing Impairment and Aging on Spatial Processing.” Ear and Hearing 34 (1): 15–28. doi:10.1097/AUD.0b013e3182617f94.
  • Grothe, B., M. Pecka, and D. McAlpine. 2010. “Mechanisms of Sound Localization in Mammals.” Physiological Reviews 90 (3): 983–1012. doi:10.1152/physrev.00026.2009.
  • Hagerman, B. 1982. “Sentences for Testing Speech Intelligibility in Noise.” Scandinavian Audiology 11 (2): 79–87. doi:10.3109/01050398209076203.
  • Hagerman, B. 1993. “Efficiency of Speech Audiometry and Other Tests.” British Journal of Audiology 27 (6): 423–425. doi:10.3109/03005369309076719.
  • Hagerman, B., and C. Kinnefors. 1995. “Efficient Adaptive Methods for Measuring Speech Reception Threshold in Quiet and in Noise.” International Journal of Audiology 24 (1): 71–77. doi:10.3109/14992029509042213.
  • Håkansson, B., S. Reinfeldt, M. Eeg-Olofsson, P. Östli, H. Taghavi, J. Adler, J. Gabrielsson, S. Stenfelt, and G. Granström. 2010. “A Novel Bone Conduction Implant (BCI): Engineering Aspects and Pre-Clinical Studies.” International Journal of Audiology 49 (3): 203–215. doi:10.3109/14992020903264462.
  • Hawley, M. L., R. Y. Litovsky, and J. F. Culling. 2004. “The Benefit of Binaural Hearing in a Cocktail Party: effect of Location and Type of Interferer.” The Journal of the Acoustical Society of America 115 (2): 833–843. doi:10.1121/1.1639908.
  • Ihlefeld, A., S. J. Sarwar, and B. G. Shinn‐Cunningham. 2006. “Spatial Uncertainty Reduces the Benefit of Spatial Separation in Selective and Divided Listening.” The Journal of the Acoustical Society of America 119 (5): 3417–3417. doi:10.1121/1.4786823.
  • Kathleen Pichora-Fuller, M., B. A. Schneider, and M. Daneman. 1995. “How Young and Old Adults Listen to and Remember Speech in Noise.” The Journal of the Acoustical Society of America 97: 593–608. doi:10.1121/1.412282.
  • Kidd, G., Jr., T. L. Arbogast, C. R. Mason, and F. J. Gallun. 2005. “The Advantage of Knowing Where to Listen.” The Journal of the Acoustical Society of America 118 (6): 3804–3815. doi:10.1121/1.2109187.
  • Lingner, A., B. Grothe, L. Wiegrebe, and S. D. Ewert. 2016. “Binaural Glimpses at the Cocktail Party?” Journal of the Association for Research in Otolaryngology 17 (5): 461–473. doi:10.1007/s10162-016-0575-7.
  • Litovsky, R. 2015. “Development of the Auditory System.” Handbook of Clinical Neurology 129: 55–72. doi:10.1016/B978-0-444-62630-1.00003-2.
  • Litovsky, R. Y., P. M. Johnstone, S. Godar, S. Agrawal, A. Parkinson, R. Peters, and J. Lake. 2006. “Bilateral Cochlear Implants in Children: Localization Acuity Measured with Minimum Audible Angle.” Ear and Hearing 27 (1): 43–59. doi:10.1097/01.aud.0000194515.28023.4b.
  • Marrone, N., C. R. Mason, and G. Kidd. 2008a. “Tuning in the Spatial Dimension: Evidence from a Masked Speech Identification Task.” The Journal of the Acoustical Society of America 124 (2): 1146–1158. doi:10.1121/1.2945710.
  • Marrone, N., C. R. Mason, and G. Kidd, Jr. 2008b. “The Effects of Hearing Loss and Age on the Benefit of Spatial Separation between Multiple Talkers in Reverberant Rooms.” The Journal of the Acoustical Society of America 124 (5): 3064–3075. doi:10.1121/1.2980441.
  • Noble, W., D. Byrne, and K. Ter-Horst. 1997. “Auditory Localization, Detection of Spatial Separateness, and Speech Hearing in Noise by Hearing Impaired Listeners.” The Journal of the Acoustical Society of America 102 (4): 2343–2352. doi:10.1121/1.419618.
  • Nolan, M., and D. J. Lyon. 1981. “Transcranial Attenuation in Bone Conduction Audiometry.” The Journal of Laryngology & Otology 95 (6): 597–608. doi:10.1017/S0022215100091155.
  • Peissig, J., and B. Kollmeier. 1997. “Directivity of Binaural Noise Reduction in Spatial Multiple Noise-Source Arrangements for Normal and Impaired Listeners.” The Journal of the Acoustical Society of America 101 (3): 1660–1670. doi:10.1121/1.418150.
  • Priwin, C., S. Stenfelt, G. Granström, A. Tjellström, and Bo Håkansson. 2004. “Bilateral Bone-Anchored Hearing Aids (BAHAs): An Audiometric Evaluation.” The Laryngoscope 114 (1): 77–84. doi:10.1097/00005537-200401000-00013.
  • Reinfeldt, S., B. Håkansson, H. Taghavi, and M. Eeg-Olofsson. 2015. “New Developments in Bone-Conduction Hearing Implants: A Review.” Medical Devices (Auckland, N.Z.) 8: 79–93. doi:10.2147/MDER.S39691.
  • Reinfeldt, S., B. Håkansson, H. Taghavi, K. J. Freden Jansson, and M. Eeg-Olofsson. 2015. “The Bone Conduction Implant: Clinical Results of the First Six Patients.” International Journal of Audiology 54 (6): 408–416. doi:10.3109/14992027.2014.996826
  • Snyder, J. M. 1973. “Interaural Attenuation Characteristics in Audiometry.” Laryngoscope 83 (11): 1847–1855.
  • Stenfelt, S. 2012. “Transcranial Attenuation of Bone-Conducted Sound When Stimulation is at the Mastoid and at the Bone Conduction Hearing Aid Position.” Otology & Neurotology 33 (2): 105–114. doi:10.1097/MAO.0b013e31823e28ab.
  • Stenfelt, S., and R. L. Goode. 2005. “Transmission Properties of Bone Conducted Sound: Measurements in Cadaver Heads.” The Journal of the Acoustical Society of America 118 (4): 2373–2391. doi:10.1121/1.2005847.
  • Stenfelt, S., and M. Zeitooni. 2013. “Binaural Hearing Ability with Mastoid Applied Bilateral Bone Conduction Stimulation in Normal Hearing Subjects.” The Journal of the Acoustical Society of America 134 (1): 481–493. doi:10.1121/1.4807637
  • Swaminathan, J., C. R. Mason, T. M. Streeter, V. Best, G. Kidd, Jr., and A. D. Patel. 2015. “Musical Training, Individual Differences and the Cocktail Party Problem.” Scientific Reports 5 (1): 11628. doi:10.1038/srep14401.
  • Taghavi, H., B. Håkansson, S. Reinfeldt, M. Eeg-Olofsson, K. J. Jansson, E. Hakansson, and B. Nasri. 2015. “Technical Design of a New Bone Conduction Implant (BCI) System.” International Journal of Audiology 54 (10): 736–744. doi:10.3109/14992027.2015.1051665.
  • Zahorik, P., and A. M. Rothpletz. 2014. “Speech, Spatial, and Qualities of Hearing Scale (SSQ): Normative Data from Young, Normal-Hearing Listeners.” The Journal of the Acoustical Society of America 135: 2165.
  • Zeitooni, M., E. Maki-Torkko, and S. Stenfelt. 2016. “Binaural Hearing Ability with Bilateral Bone Conduction Stimulation in Subjects with Normal Hearing: Implications for Bone Conduction Hearing Aids.” Ear and Hearing 37 (6): 690–702. doi:10.1097/AUD.0000000000000336.