Original Article

Manual switching between programs intended for specific real-life listening environments by adult cochlear implant users: do they use the intended program?

Received 12 Jan 2023, Accepted 13 Feb 2024, Published online: 06 Mar 2024

Abstract

Objective

The aim of the current study was to investigate the use of manually and automatically switching programs in everyday life by adult cochlear implant (CI) users.

Design

Participants were fitted with an automatically switching sound processor setting and 2 manual programs for 3-week study periods. They received an extensive counselling session. Datalog information was used to analyse the listening environments identified by the sound processor, the program used and the number of program switches.

Study sample

Fifteen adult Cochlear CI users. Average age 69 years (range: 57–85 years).

Results

Speech recognition in noise was significantly better with the “noise” program than with the “quiet” program. On average, participants correctly classified 4 out of 5 listening environments in a laboratory setting. Participants switched, on average, less than once a day between the 2 manual programs and the sound processor was in the intended program 60% of the time.

Conclusion

Adult CI users rarely switch between two manual programs and often leave the sound processor in a program not intended for the specific listening environment. A program that switches automatically between settings therefore seems to be a more appropriate option to optimise speech recognition performance in daily listening environments.

Introduction

Most current cochlear implant (CI) devices offer the option to adjust settings to different listening environments, either by self-selecting one of the programs in the sound processor (manual switching) or by using a setting in which the sound processor automatically switches between programs (automatic switching). Programs for different listening environments differ in, for instance, microphone directionality and noise reduction algorithms. Often, these programs have specific names such as "quiet," "noise," "cafeteria" or "music" to guide the CI user and audiologist. Even though multiple programs or automatically switching programs are fitted on a regular basis, little is known about the need for and use of these multiple programs and automatically switching devices. This followed from a scoping review in which the authors found only a small number of studies on the use of multiple programs and automatically switching hearing aids, whilst no studies were identified that concerned the use of automatically switching CI sound processors (de Graaff et al. 2018). A literature search with the same terms from that scoping review revealed no additional studies on this topic for CI and just one for hearing aids (Pasta et al. 2022, see below).

Multiple program devices allow the user to manually select a program for a specific listening environment. Previous studies indicated that only some users of multiple program devices use the possibility to switch between programs, and that users tend to leave devices in the default setting (van den Heuvel, Goverts, and Kapteyn 1997; Cord et al. 2002; Banerjee 2011; Searchfield et al. 2018). Different programs require the user to correctly identify the listening environment and subsequently select the most appropriate program in the hearing aid or CI. Recently, Pasta et al. (2022) showed that, on average, hearing aid users selected a "speech in noise" program in louder, noisier and less-modulated environments compared with the "general" program. They also reported that program switches often preceded a change in sound environment, suggesting that hearing aid users choose different programs based on the (sometimes anticipated) change in sound environment (Pasta et al. 2022).

With the knowledge that some users of hearing devices are not able, or willing, to manually switch between different programs, devices were introduced that could automatically switch between programs. Here, analyses of the input signal are used to classify the listening environment and to automatically choose the most appropriate processing. First, the acoustic microphone input is analysed by extracting specific features such as signal level and modulations. Subsequently, the most likely listening environment is determined based on the extracted features. Lastly, it is decided whether the current processing must be changed, based on the identified listening environment (see Mauger et al. (2014) for a more extensive description of such an algorithm).
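
As a rough illustration of this three-stage pipeline, the Python sketch below extracts two features (broadband level and envelope modulation depth), applies toy decision rules, and changes the active processing only after a scene has persisted for several frames. The features, thresholds and hold time are our assumptions for illustration, not the proprietary algorithm.

```python
import numpy as np

def extract_features(frame: np.ndarray) -> dict:
    """Stage 1: extract simple features from one audio frame
    (samples normalised to +/-1 full scale)."""
    rms = np.sqrt(np.mean(frame ** 2)) + 1e-12
    envelope = np.abs(frame)
    return {
        "level_db": 20 * np.log10(rms),                            # signal level
        "mod_depth": envelope.std() / (envelope.mean() + 1e-12),   # modulations
    }

def classify_scene(f: dict) -> str:
    """Stage 2: map features to the most likely listening environment.
    The thresholds are illustrative only."""
    if f["level_db"] < -50:
        return "quiet"
    if f["mod_depth"] > 0.8:
        return "speech" if f["level_db"] < -30 else "speech in noise"
    return "noise"

class SceneSwitcher:
    """Stage 3: decide whether the current processing must be changed.
    Processing switches only after the same scene has been observed for
    `hold` consecutive frames (the hold time is an assumption), which
    avoids rapid back-and-forth switching."""

    def __init__(self, hold: int = 10):
        self.hold = hold
        self.active = "quiet"
        self._candidate, self._count = "quiet", 0

    def update(self, frame: np.ndarray) -> str:
        scene = classify_scene(extract_features(frame))
        if scene == self._candidate:
            self._count += 1
        else:
            self._candidate, self._count = scene, 1
        if scene != self.active and self._count >= self.hold:
            self.active = scene
        return self.active
```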

Previous research in CI users has shown that automatically switching sound processors generally improve speech recognition and listening comfort compared to manually switching sound processors (Mauger et al. 2014; Gilden et al. 2015; Wolfe et al. 2015; De Ceulaer et al. 2017). Mauger et al. (2014) reported that the automatic switching setting in Cochlear CIs provided significantly better or equivalent speech recognition in several listening conditions, compared to standard programs. Similar results in larger groups of participants were reported by others (Gilden et al. 2015; Wolfe et al. 2015; De Ceulaer et al. 2017). More recently, Potts, Jang, and Hillis (2021) evaluated four directional microphone options in adult CI users to determine which option provides the best speech-in-noise performance, measured using eight loudspeakers in a 360-degree array with HINT sentences presented in restaurant noise. They reported that the best directional option varied across participants, but concluded that automatic directionality is an appropriate everyday processing option. It should be noted that these were all experimental studies in laboratory settings, so the potential benefit in real-world use is not clear.

An example of an automatic switching setting in Cochlear™ CIs is SCAN, which uses the SmartSound® iQ input processing technology (Mauger et al. 2014). The automatic scene classifier distinguishes between 6 different scenes called quiet, speech in quiet, music, speech in noise, noise and wind noise. After classification of the listening environment into one of these scenes, the most appropriate sound processor setting is activated. These settings differ in microphone directionality: the omnidirectional microphone is activated in quiet, speech, and music scenes, a fixed directional microphone (Zoom) is activated in the noise scene, and an adaptive directional microphone (BEAM) is activated in the speech in noise scene (Mauger et al. 2014).
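
Restated in code form, this scene-to-setting mapping amounts to a simple lookup. In the sketch below the wind-noise entry is an assumption, as the microphone mode for that scene is not specified above; only the activation of wind noise reduction is.

```python
# Scene-to-setting mapping as described for SCAN (Mauger et al. 2014).
# The "wind noise" entry is an assumption: the text only states that
# wind noise reduction is activated in that scene.
SCENE_TO_SETTING = {
    "quiet":           "omnidirectional",
    "speech in quiet": "omnidirectional",
    "music":           "omnidirectional",
    "noise":           "fixed directional (Zoom)",
    "speech in noise": "adaptive directional (BEAM)",
    "wind noise":      "omnidirectional + wind noise reduction",
}
```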

The scene identified by the automatic scene classifier and the program in use are automatically stored in the Cochlear sound processor (datalog). The datalog information provides an objective overview of the daily usage of the device, the experienced listening environments, the loudness of these listening environments, the use of different programs, and the volume and sensitivity settings (Mauger et al. 2014; Busch, Vanpoucke, and van Wieringen 2017; Busch et al. 2020). Ganek et al. (2021) studied whether auditory environments were accurately captured by the automatic classifier from the Cochlear™ Nucleus® 6 processor. They compared human-coded day-long audio recordings collected with a Language ENvironment Analysis system (LENA; LENA Research Foundation, USA) to the automatic classification from the CI processor. They reported a strong correlation and stated that the CI classification algorithm accurately reflects listening environments. Eichenauer et al. (2021), however, revealed some misclassifications of the automatic classifier, especially in conditions with speech and reverberation where no specific classification could be made and the processor remained in its previous setting. Nevertheless, we consider the automatic scene classifier sufficiently accurate for the purpose of our study.

The present study was driven by the results of the scoping review (de Graaff et al. 2018) in which no studies were found evaluating the use of multiple programs in CI users. The objective of the current study was to investigate the use of manually and automatically switching programs in everyday life by adult CI users. We used an extensive counselling session to ensure that CI users (1) understood the purpose of the different programs and the concept of listening environments, (2) experienced the benefit of different programs in realistic listening environments, and (3) understood which program was advised for which listening environment. Participants were fitted with the automatically switching sound processor setting for a 3-week study period. In another 3-week period, participants had to manually switch between 2 programs with different microphone directionalities. The datalog information contained the listening environments identified by the sound processor, the program used and the number of program switches. This information was used to evaluate the manual selection of programs among CI users. Our primary interest was the number of program changes during the day and whether CI users chose the program intended for specific real-life listening environments.

Materials and methods

The study was approved by the Medical Ethics Committee of the Amsterdam University Medical Centre, location VUmc (2015.188). Participants enrolled voluntarily and provided informed consent prior to the study. All participants received a fee of €7.50 per hour spent during their visits to the clinic and were reimbursed for their travel expenses.

Study participants

Fifteen adult CI users (7 males; 8 females) participated in this study. Their average age at time of inclusion was 69 years (range: 57–85 years) and their average CI experience was 6.8 years. Participant characteristics are shown in Table 1. All participants were native Dutch speakers and became severely hearing impaired after the age of seven years (i.e. postlingually deafened). All CI users had at least one year of experience with their CI, and used the Cochlear™ Nucleus® CP910, CP950 or CP1000 sound processor. They all had more than one program in their sound processor prior to the study. Ten participants indicated that they normally used the remote control for switching between programs, three used the processor button and two used both options.

Table 1. Demographic characteristics of the participants and the number of programs in their sound processor prior to the study.

Study design

This was a prospective study using a crossover design in which each participant served as his or her own control. Participants were randomly assigned to one of two groups. One group started with automatic switching (group A). The other group started with manual switching (group B). After 3 weeks, group A continued with manual switching and group B continued with automatic switching. The study comprised 3 visits to the clinic (Table 2). During these visits, datalog information was read out, counselling was provided (described below in more detail), and CIs were fitted with either one automatic switching program or two programs for manual switching. Participants were asked about their experiences with, and preference for, manual or automatic switching using non-validated questionnaires. Most questions used a rating scale with ten marking options between e.g. "Never" and "Always."

Table 2. Study design.

CI fitting

First, the datalog information was read out and cleared, so that the sound processor would subsequently only contain information stored during the study period. Second, participants were asked about their preferred program, sensitivity and volume settings. Settings such as Adaptive Dynamic Range Optimisation (ADRO), Autosensitivity, Signal-to-Noise-Ratio Noise Reduction (SNR-NR) and wind noise reduction algorithms were copied from the preferred program to the study-specific program(s). Third, volume and sensitivity settings were set at the preferred level, and volume and sensitivity controls were subsequently disabled. This was done to ensure that participants could not create additional differences between programs. Finally, the sound processors were fitted with either an automatic switching program or two manual programs, and the microphone covers were replaced to avoid loss of sound quality or directionality due to dirt.

Automatic switching setting

The automatic switching program used SCAN, which is incorporated in current Cochlear CI sound processors. No other programs or settings were available to the participant. Participants were briefly informed about automatic switching and the fact that they could not change settings themselves.

Manual switching setting

The choice for two manual programs was based on an analysis of the differences in sound processor settings for the 6 scenes incorporated in the automatic classifier. Although the automatic classifier can distinguish between 6 scenes, the major feature that differs between them is the microphone directionality; the only other difference is the activation of wind noise reduction in the "wind noise" setting. Because of this, and because we wanted to make the task as understandable as possible for participants, we decided to limit the number of manual programs to 2. One of the manual programs was intended for listening in quiet (e.g. having a one-to-one conversation or listening to music), where an omnidirectional microphone would be appropriate. In the remainder of this paper, this program will be referred to as "quiet." The second manual program was intended for listening in noise (e.g. having a conversation in a crowded restaurant), where a directional microphone would be appropriate. This program will be referred to as "noise." The adaptive directional microphone (BEAM) was chosen for the noise program because it is more advanced than the fixed directional microphone (Zoom), and previous research has not shown significant differences in speech recognition in noise between these 2 microphone directionalities (Mauger et al. 2014; Potts and Kolb 2014; Potts, Jang, and Hillis 2021), except when used with ASC and ADRO disabled (Potts and Kolb 2014).

The manual programs were randomly assigned to program 1 and 2 in the sound processor. Participants could switch between programs using the buttons on the sound processor or via the remote control.

Counselling and classification of listening environments by participants

Participants were extensively counselled prior to the study period with manual switching. This counselling session was identical for all participants. In this session, participants were instructed about the differences between the manual programs, the differences between using an omnidirectional microphone and a directional microphone, and the effect of each program on speech recognition. It comprised training on the purposes of different programs and microphone settings, using printed statements and pictures of listening environments (Cox, Johnson, and Xu 2016) and loudspeaker simulations of listening environments. Participants were told which of the 2 programs was the "quiet" program and which was the "noise" one. Next, they were seated in a room surrounded by five loudspeakers. Each loudspeaker was positioned 2 m from the participant, except for the loudspeaker at 0 degrees, representing the speaker, which was positioned at 1 m (Figure 1). This loudspeaker setup was chosen to be representative of common real-life listening environments. Obviously, the effect of the microphone directionality in this setup is smaller than in a setup with, for example, only noise from the side or behind the listener.

Figure 1. Loudspeaker setup for training and testing during the counselling session. The conditions with omnidirectional microphones in the counselling session represented listening environments where the “quiet” program was appropriate, and the conditions with directional microphones represented environments where the “noise” program was appropriate. The numbers between brackets refer to the loudspeakers.


First, participants performed 4 digits-in-noise tests in a row, 2 tests (test and retest) with each of the manual programs. The order was balanced across participants. The digits-in-noise test is a speech-in-noise test comprising 24 digit-triplets presented in a background of steady-state noise (Smits, Goverts, and Festen 2013). An adaptive procedure was used to determine the speech recognition threshold (SRT), which is the signal-to-noise ratio (SNR) at which a listener correctly recognises 50% of the digit-triplets. The digits-in-noise test was used to objectively establish the benefit of the adaptive directional microphone for speech recognition in noise.
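
For illustration, the following minimal Python sketch runs such an adaptive track under assumptions we state explicitly: a 1-up/1-down rule with a fixed 2 dB step, whole-triplet scoring, and an SRT taken as the mean SNR of trials 5 through 24 plus a virtual 25th presentation. The exact parameters of the published test (Smits, Goverts, and Festen 2013) may differ.

```python
import numpy as np

def din_srt(triplet_correct, n_triplets: int = 24,
            start_snr: float = 0.0, step_db: float = 2.0) -> float:
    """Run one adaptive digits-in-noise track and return the SRT estimate.

    triplet_correct(snr) -> bool reports whether the listener repeated the
    whole digit triplet correctly at the given SNR. The 1-up/1-down rule
    converges on the SNR where 50% of triplets are correct.
    """
    snr, track = start_snr, []
    for _ in range(n_triplets):
        track.append(snr)
        snr += -step_db if triplet_correct(snr) else step_db
    track.append(snr)                 # SNR of the virtual 25th triplet
    return float(np.mean(track[4:]))  # discard the first 4 approach trials

# Example: a simulated listener with a true SRT of -6 dB SNR.
rng = np.random.default_rng(0)
simulated = lambda snr: rng.random() < 1 / (1 + 10 ** (-(snr + 6)))
print(f"estimated SRT: {din_srt(simulated):.1f} dB SNR")
```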

Second, five specific listening environments were created using the loudspeaker setup mentioned above and shown in Figure 1. The experimenter described the listening environment (e.g. listening to music or having a conversation in a noisy environment) and explained which of the manual switching programs ("quiet" or "noise") was most suitable for that specific listening environment. Participants received printed statements and illustrations of the example listening environments. Then, sound files were played through the loudspeakers to let participants experience the different listening environments and to give them insight into the purpose of the 2 manual programs.

Third, the participants’ ability to classify listening environments was assessed. The same listening environments as described in the previous step were presented once through the loudspeakers, but in random order, and without telling the participant which listening environment it was. Participants were asked to choose the program (program 1 or program 2) that they thought was the most appropriate for that specific listening environment. Note that participants knew which program was the “quiet” and which was the “noise” program. The number of (in)correctly chosen programs was registered.

In the final step of the counselling session, a typical speech-in-noise listening environment was created to let participants experience the difference between the “noise” program (with directional microphones) and the “quiet” program (with omnidirectional microphones) for speech recognition in noise. Here, speech was presented from the front (loudspeaker at 0 degrees) and 4 interfering talkers (Dutch male and female speakers) were presented from loudspeakers at 45, 135, 225 and 315 degrees. The speech was adjusted to a level at which the participant reported that he/she was just able to understand some of the speech. Participants were instructed to switch to the “quiet” program and, after listening for some time, were instructed to switch to the “noise” program to experience the difference in this listening environment.

After the 3-week study period with manual switching between programs, the protocol as described in step 3 was repeated. Thus, again 5 listening environments were presented in random order and the CI user had to choose the appropriate program. The number of (in)correctly chosen programs was registered and compared to the results obtained during the counselling session before the 3-week study period with manual switching.

Accuracy of the SCAN automatic classifier from the sound processor

To ascertain the quality of the classifier in the sound processor, we created a range of "real world" listening environments (quiet, speech, speech in noise, noise, and music) using the loudspeaker setup described above. Sound levels varied between 55 and 75 dB SPL and SNRs ranged between −20 dB and 20 dB. An experimenter wearing a CP910 sound processor behind the ear used the Nucleus® CR230 remote control to read out the scenes as classified by the sound processor. The sound processor generally classified the presented listening environments correctly. The specific scene classification depended, among other things, on the level, on quiet moments between the stimuli and on the prior condition, which makes it impossible to indicate precisely in which situations the classifier errs. In one of the listening environments, with only one interfering speaker, the classifications varied widely, so no general advice for the microphone setting can be given there either. For the other 387 listening environments, the microphone setting selected by the sound processor matched the intended setting (omnidirectional or directional) in 97% of the environments. Therefore, we considered the selections of the automatic scene classifier to be representative of the real-life listening environments encountered during the 3-week study periods.

(Statistical) analyses

Datalog

A specifically developed software tool (Cochlear Technology Centre, Mechelen, Belgium), which provided more detailed information than what is available through the standard clinical CustomSound software, was used to extract all datalog information from the CustomSound database. Information that was read out included: time spent in each of the 6 listening environments classified by the automatic classifier, programs used in these listening environments, and the number of manual program changes.

The listening environments that were encountered and stored in the datalog information were aggregated into quiet listening environments (quiet, speech and music) or noisy listening environments (wind, noise and speech in noise). Note that, on average, sound processors were in the wind or music scenes only 0.2% and 1.7% of the time, respectively.
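
A minimal sketch of this aggregation, assuming a hypothetical datalog export with one row per logging interval (the column names and layout are ours, not Cochlear's format):

```python
import pandas as pd

QUIET_SCENES = {"quiet", "speech", "music"}
NOISY_SCENES = {"wind", "noise", "speech in noise"}

def percent_time_in_intended_program(log: pd.DataFrame) -> float:
    """log: one row per datalog interval, with columns 'scene',
    'program' ('quiet' or 'noise') and 'seconds' (assumed layout)."""
    env = log["scene"].map(lambda s: "quiet" if s in QUIET_SCENES else "noisy")
    intended = ((env == "quiet") & (log["program"] == "quiet")) | \
               ((env == "noisy") & (log["program"] == "noise"))
    return 100 * log.loc[intended, "seconds"].sum() / log["seconds"].sum()

# Example with three fictitious intervals (yields 70%):
log = pd.DataFrame({
    "scene":   ["speech", "speech in noise", "music"],
    "program": ["quiet",  "quiet",           "quiet"],
    "seconds": [3600,     1800,              600],
})
print(f"{percent_time_in_intended_program(log):.0f}% in intended program")
```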

Counselling

A repeated measures analysis of variance (ANOVA) was used to test for significant differences in SRT between test and retest, and between the quiet and noise programs. Statistical analyses were performed with SPSS version 26 (IBM SPSS Statistics, Inc; Armonk, NY, USA). A p-value < 0.05 was considered statistically significant.
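
For readers without SPSS, an equivalent repeated-measures ANOVA can be sketched in Python with statsmodels. The data below are synthetic, generated around the group means reported in the Results, and serve only to show the analysis structure.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic long-format data: 15 subjects x 2 programs x 2 sessions,
# centred on the reported means (-4.0 dB quiet, -6.2 dB noise).
rng = np.random.default_rng(0)
subject = np.repeat(np.arange(15), 4)
program = np.tile(["quiet", "quiet", "noise", "noise"], 15)
session = np.tile(["test", "retest"], 30)
srt = np.where(program == "quiet", -4.0, -6.2) + rng.normal(0, 1.5, size=60)

df = pd.DataFrame({"subject": subject, "program": program,
                   "session": session, "srt": srt})

# Repeated-measures ANOVA with program and session as within factors.
print(AnovaRM(df, depvar="srt", subject="subject",
              within=["program", "session"]).fit())
```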

Results

Speech recognition in noise

The repeated measures ANOVA of the digits-in-noise test results showed significantly lower mean SRTs for the retest than for the test [F(1,14) = 13.73, p < 0.01], which can be attributed to a learning effect (Smits, Goverts, and Festen 2013). The mean SRTs for the noise program (directional microphone) were significantly lower than for the quiet program (omnidirectional microphone) [F(1,14) = 15.20, p < 0.01]. The mean SRTs are shown in Figure 2. Averaged across participants, mean SRTs were −4.0 dB SNR and −6.2 dB SNR measured with the sound processor in the quiet and noise program, respectively.

Figure 2. Violin plot showing speech recognition in noise scores (SRTs) with omnidirectional (circles) and directional (triangles) microphone settings.


Classification of listening environments by participants

Figure 3 shows how often each participant correctly classified the listening environments we created in the laboratory setting. Results from the measurements immediately after the counselling session and after the 3-week study period with manual switching are shown as separate bars. Only 2 participants correctly classified all listening environments in both sessions (#4 and #13). Overall, the average performance in the second session was almost the same as in the first session (3.9 vs 4.0 out of 5 listening environments).

Figure 3. The number of correctly classified listening environments out of 5, immediately after the extended counselling session (first bar) and after the 3-week study period of manual switching (second bar). Green represents correctly classified listening environments and red represents incorrectly classified listening environments.


Datalog information: program switches and use of the program intended for a specific listening environment

During the 3 weeks of manual switching, participants switched, on average, less than once a day between the 2 programs (i.e. an average of 20 switches in 3 weeks), with a minimum of 6 and a maximum of 32 switches (Figure 4).

Figure 4. Total number of program switches in the 3-week period (left axis) and average number of program switches per day (right axis).


As mentioned before, listening environments as classified by the sound processor were aggregated into quiet listening environments (quiet, speech and music scenes) or noisy listening environments (wind, noise and speech in noise scenes). Overall, participants spent 70.1% of their time in quiet and 29.9% in noisy listening environments. Table 3 shows the percentage of time that the quiet and noise programs were used in quiet and noisy listening environments. Averaged across participants, the sound processor was in the intended program 60% of the time (i.e. 45% + 15%). Figure 5 shows a more detailed picture of the data per participant: for each participant and each of the two listening environments (quiet and noise), it shows the percentage of time that the sound processor was in the intended program. A binomial test indicated that in quiet listening environments, the quiet program was not used significantly more than the noise program, p = 0.304 (1-sided). Similarly, for noisy listening environments, a binomial test indicated that the noise program was not used significantly more than the quiet program, p = 0.500 (1-sided). Participants #4, #7 and #9 often used the intended program when in quiet, but not when in noise. The likely explanation is that these participants almost exclusively used the quiet program (i.e. 100%, 97% and 97% of the time, respectively). Note that the number of program switches did not deviate from the normal range for these participants (Figure 4).
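
The counts entering the binomial tests are not detailed above; a plausible reading is that each participant was scored by whether they spent more than half of their time in the intended program for a given environment, with the number of such participants tested against chance. A sketch under that assumption (the count of 9 is a placeholder, not the study's raw data):

```python
from scipy.stats import binomtest

n_participants = 15
n_mostly_intended = 9  # placeholder: participants who used the intended
                       # program >50% of the time in quiet environments

# One-sided exact binomial test against chance (p = 0.5), as in the article.
result = binomtest(n_mostly_intended, n_participants, p=0.5,
                   alternative="greater")
print(f"p = {result.pvalue:.3f}")
```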

Figure 5. The percentage of time participants used the intended (green bars) or non-intended (red bars) program in quiet (left) and noisy listening environments (right).


Table 3. Percentage of time that the quiet and noise programs were used while in quiet and noisy listening environments. Sixty percent of the time, the chosen program was in agreement with the real-life listening environment.

Datalog information showed that, on average, program 1 was used 61% of the time and program 2 was used 39% of the time. Half of the participants had the quiet program as program 1, whilst the other half had the noise program as program 1. Note that the Cochlear sound processor stays in the same program (1 or 2) after switching the sound processor off and on.

Preferred sound processor setting and questionnaire

At the end of the study, 10 out of 15 participants preferred the automatic program selection. The main reasons given by the participants were mostly related to the ease of use, not having to worry about selecting the appropriate program and not constantly being reminded of their hearing impairment. The other 5 participants (#3, #6, #7, #8 and #14) preferred the manual selection of programs. These participants reported that they wanted to be in control of the settings of their sound processor. Some felt that the automatic switching program did not always choose the appropriate setting. The average age of the participants who preferred manual selection was 68 years and their average CI experience was 7 years. This was comparable to the average age and years of experience of those who preferred the automatic switching program (70 and 7 years, respectively; see Table 1).

As mentioned previously, participants were asked to fill in a questionnaire related to their preference for either manual or automatic switching. On the statement “In some situations a different program is probably better, but for practical reasons I won’t switch anyway,” the average score was 5.4 on a 10-point scale ranging from “this never happens” to “this always happens.” The average score on the statement “My CI is sometimes in a different program than it should be” was 4.4 on a 10-point scale from “this never happens” to “this always happens.” The average score on the statement “It is clear to me which program to choose in every situation” was 6.9 on a 10-point scale from “never” to “always” and on the statement “Can you hear the difference between the programs in a quiet/noisy environment?” it was 6.8 for the quiet environment and 7.5 for the noisy environment on a 10-point scale from “no difference” to “much difference.”

Discussion

To the best of our knowledge, this study is one of the first to investigate the use of manually and automatically switching programs for various listening environments by adult CI users. In 2 study periods of 3 weeks, participants used either the automatic switching program incorporated in Cochlear™ Nucleus® sound processors, or 2 manual programs. The manual programs were programmed for quiet listening environments using the omnidirectional microphone setting, and for noisy listening environments using the adaptive directional microphone setting. The datalog information containing the listening environments identified by the sound processor was used to evaluate the listening environments and the appropriateness of the manual program selection by CI users in those listening environments.

The present study showed a significant benefit in terms of speech recognition when the microphone directionality is switched to a directional setting in noisy listening environments (Figure 2). This should motivate CI users to switch between the intended programs and to experience the benefit in the different listening environments. The present study suggests, however, that adult CI users hardly ever manually select different programs in various listening environments. The datalog information showed that, on average, participants switched only approximately once a day between the 2 programs. Unfortunately, the number of actual changes in listening environments per day is not known, because this information is not stored in the sound processor's datalog. The low number of manual program switches was unexpected, especially for the participants in our study: they were instructed about the use of different programs, were aware of the study and knew that we would read out the number of switches from the sound processor. It is therefore to be expected that the number of program switches outside of a study context will be even lower. Although a precise comparison between our study and that of Pasta et al. (2022) is not possible due to differences in study populations, methodology and analyses, their study suggests that the number of manual program switches in a large clinical group of hearing aid users is also low. They showed that of the 18,663 hearing aid users with more than 1 program, only 1,312 (i.e. approximately 7%) switched more than 5 times to programs other than the general program during a 4-month period. There is little literature on the actual average number of changes in listening environments per day for listeners with hearing loss. Ganek et al. (2021) studied auditory environments experienced by young children with CIs. Daylong audio recordings were segmented and coded by humans; on average, a child experienced a different auditory environment every 3 min 23 s. Although figures could be different for adult CI users, it seems likely that they are more or less similar for the adults surrounding these children. Wagener, Hansen, and Ludvigsen (2008) asked a group of hearing aid users how often different listening situations occurred in their daily life and reported large differences between these listeners. It is plausible that the frequency of changes in listening environments within our relatively elderly group of CI users is substantially lower than in younger CI users. Because of the low number of switches per day, the participants mainly left their CI in one of the two programs. This is in line with previous studies showing that hearing device users tend to leave their devices in the default setting (van den Heuvel, Goverts, and Kapteyn 1997; Cord et al. 2002; Banerjee 2011; Searchfield et al. 2018). Our participants acknowledged that they did not always switch programs when they were supposed to, as shown by their responses to the questions asked after the trial periods. The analysis of the questionnaire suggests that the participants believed their CI was in the intended program much more often than the datalog showed.

Although most CI processors allow for at least 4 manual programs, we opted for a simpler setting with only 2 programs (i.e. quiet and noise). The quiet and noise programs used in the current study were easy to understand and refer to everyday listening environments. The results of this study indicate that extensive counselling of CI users on the use of multiple programs and on how to classify listening environments does not appear to be very effective. Even after an intensive counselling session, a substantial proportion of participants did not correctly classify all listening environments in the laboratory setting, and the classification rate remained approximately the same after 3 weeks of manual switching (Figure 3). However, participants reported being fairly confident that they knew what the right program was during daily use. We must realise that in daily life it is even more complex to classify listening environments on acoustic properties than in our laboratory setting, because real-life situations are dynamic (Goverts and Colburn 2020). Participants may, however, classify listening environments in real-world situations more often based on the environment or context (e.g. dining in a restaurant) than on acoustic properties alone. An additional problem in classifying listening environments is that not only the speaker, but also the listener and maskers can move, leading to varying interaural differences. Furthermore, the orientation of the listener's head is not fixed, which influences speech recognition performance (Grange and Culling 2016; Grange et al. 2018). This means that the optimal setting (e.g. omnidirectional or directional microphone) can change frequently. We expect that an even more extensive counselling and training period would be required to familiarise CI users with the classification of daily life listening environments and the corresponding switches, if this is at all possible during standard visits.

It has been suggested that hearing aid users need to experience a clear benefit when switching between programs (Kuk 1996; Cord et al. 2002). McShefferty, Whitmer, and Akeroyd (2015) demonstrated that the minimal clinically important difference for SNR improvement is 3 dB. That is, any difference between programs (e.g. a directional microphone vs an omnidirectional microphone) should provide at least 3 dB SNR improvement to give a reliable and consistently noticeable benefit for hearing impaired listeners. In another study, the authors found that a change in SNR of 6 to 8 dB was needed to immediately motivate participants to swap devices or attend the clinic (McShefferty, Whitmer, and Akeroyd 2016). We created a listening environment in the counselling session to let participants experience the benefit of the directional microphone in the "noise" program. In the standard measurement setup, use of the directional microphone improved the SRT by approximately 2 dB. It is therefore likely that in many real-life listening environments, CI users will not notice a clear difference between programs in terms of speech recognition performance. Some participants mentioned not experiencing a noticeable difference between the programs, or noticing a difference immediately after switching that disappeared after some time. Participants also reported that they spent most of their time in quiet listening environments and avoided noisy situations because of their hearing impairment, which is in accordance with the datalog information (Table 3 and Figure 5).

In summary, our results show that the number of program changes was low, that participants had problems correctly classifying listening environments and that the direct benefit in speech recognition observed when switching from one program to another was low. These findings may indicate that an automatic switching device could be advised for most patients.

Important for the validity of our study is the accuracy of the automatic classifier in the Cochlear sound processor. As described in the methods section, a lab session in which several listening environments were simulated confirmed earlier results (Ganek et al. 2021). Thus, it may be concluded that our approach of comparing the self-selected program to the listening environment as classified by the CI processor was valid.

A limitation of the study is that we did not account for participants' prior experience with their CI. It is plausible that CI users accustomed to an automatically switching program switch less frequently in a study setting than CI users habituated to manual program switching. Given the comprehensive counselling session we provided, we assumed that participants understood the benefits of switching and were aware that switching was the focus of the research. Furthermore, all participants had multiple programs in their sound processors prior to the study. Moreover, in routine fitting sessions with speech-language pathologists and audiologists (totalling over 20 hours in the first year after implantation), the use of multiple programs had already been discussed extensively. In future research, it is important to take participants' previous experiences into account.

Conclusion

In conclusion, the adult CI users in our study rarely (on average less than once a day) switched between the two manual programs, "quiet" and "noise," for different listening environments, although we demonstrated a benefit of approximately 2 dB SNR when using the "noise" program in a speech-in-noise recognition task. Furthermore, CI users were not able to correctly classify all listening environments in a laboratory setting, even after intensive counselling. We compared the listening environment as classified by the automatic scene classifier with the program used by the participant, both stored in the datalog of the CI sound processor. These analyses showed that the sound processor was often (40% of the time) not in the program intended for the specific listening environment. A program that switches automatically between settings for different listening environments therefore seems to be a more appropriate option for most CI users to optimise speech recognition performance in daily listening environments.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Banerjee, S. 2011. “Hearing Aids in the Real World: Use of Multimemory and Volume Controls.” Journal of the American Academy of Audiology 22 (6):359–374. https://doi.org/10.3766/jaaa.22.6.5
  • Busch, T., F. Vanpoucke, and A. van Wieringen. 2017. “Auditory Environment Across the Life Span of Cochlear Implant Users: Insights From Data Logging.” Journal of Speech, Language, and Hearing Research 60 (5):1362–1377. https://doi.org/10.1044/2016_JSLHR-H-16-0162
  • Busch, T., A. Vermeulen, M. Langereis, F. Vanpoucke, and A. van Wieringen. 2020. “Cochlear Implant Data Logs Predict Children’s Receptive Vocabulary.” Ear and Hearing 41 (4):733–746. https://doi.org/10.1097/AUD.0000000000000818
  • Cord, M. T., R. K. Surr, B. E. Walden, and L. Olson. 2002. “Performance of Directional Microphone Hearing Aids in Everyday Life.” Journal of the American Academy of Audiology 13 (6):295–307. https://doi.org/10.1055/s-0040-1715973
  • Cox, R. M., J. A. Johnson, and J. Xu. 2016. “Impact of Hearing Aid Technology on Outcomes in Daily Life I: The Patients’ Perspective.” Ear and Hearing 37 (4):e224–e237. https://doi.org/10.1097/AUD.0000000000000277
  • De Ceulaer, G., D. Pascoal, F. Vanpoucke, and P. J. Govaerts. 2017. “The Use of Cochlear’s SCAN and Wireless Microphones to Improve Speech Understanding in Noise with the Nucleus® 6 CP900 Processor.” International Journal of Audiology 56 (11):837–843. https://doi.org/10.1080/14992027.2017.1346305
  • de Graaff, F., E. Huysmans, J. C. F. Ket, P. Merkus, S. T. Goverts, C. R. Leemans, and C. Smits. 2018. “Is There Evidence for the Added Value and Correct Use of Manual and Automatically Switching Multimemory Hearing Devices? A Scoping Review.” International Journal of Audiology 57 (3):176–183. https://doi.org/10.1080/14992027.2017.1385864
  • Eichenauer, A., U. Baumann, T. Stöver, and T. Weissgerber. 2021. “Interleaved Acoustic Environments: Impact of an Auditory Scene Classification Procedure on Speech Perception in Cochlear Implant Users.” Trends in Hearing 25:23312165211014118. https://doi.org/10.1177/23312165211014118
  • Ganek, H., D. Forde-Dixon, S. L. Cushing, B. C. Papsin, and K. A. Gordon. 2021. “Cochlear Implant Datalogging Accurately Characterizes Children’s Auditory Scenes.” Cochlear Implants International 22 (2):85–95. https://doi.org/10.1080/14670100.2020.1826137
  • Gilden, J., K. Lewis, G. Grant, and J. Crosson. 2015. “Improved Hearing in Noise Using New Signal Processing Algorithms with the Cochlear Nucleus® 6 Sound Processor.” Journal of Otology 10 (2):51–56. https://doi.org/10.1016/j.joto.2015.09.001
  • Goverts, S. T., and H. S. Colburn. 2020. “Binaural Recordings in Natural Acoustic Environments: Estimates of Speech-Likeness and Interaural Parameters.” Trends in Hearing 24:2331216520972858. https://doi.org/10.1177/2331216520972858
  • Grange, J. A., and J. F. Culling. 2016. “Head Orientation Benefit to Speech Intelligibility in Noise for Cochlear Implant Users and in Realistic Listening Conditions.” The Journal of the Acoustical Society of America 140 (6):4061–4072. https://doi.org/10.1121/1.4968515
  • Grange, J. A., J. F. Culling, B. Bardsley, L. I. Mackinney, S. E. Hughes, and S. S. Backhouse. 2018. “Turn an Ear to Hear: How Hearing-Impaired Listeners Can Exploit Head Orientation to Enhance Their Speech Intelligibility in Noisy Social Settings.” Trends in Hearing 22:2331216518802701. https://doi.org/10.1177/2331216518802701
  • Kuk, F. 1996. “Subjective Preference for Microphone Types in Daily Listening Environments.” The Hearing Journal 49:29–38.
  • Mauger, S. J., C. D. Warren, M. R. Knight, M. Goorevich, and E. Nel. 2014. “Clinical Evaluation of the Nucleus (R) 6 Cochlear Implant System: Performance Improvements with SmartSound iQ.” International Journal of Audiology 53 (8):564–576. https://doi.org/10.3109/14992027.2014.895431
  • McShefferty, D., W. M. Whitmer, and M. A. Akeroyd. 2015. “The Just-Noticeable Difference in Speech-to-Noise Ratio.” Trends in Hearing 19:1–9. https://doi.org/10.1177/2331216515572316
  • McShefferty, D., W. M. Whitmer, and M. A. Akeroyd. 2016. “The Just-Meaningful Difference in Speech-to-Noise Ratio.” Trends in Hearing 20:1–11. https://doi.org/10.1177/2331216515626570
  • Pasta, A., T.-I. Szatmari, J. H. Christensen, K. J. Jensen, N. H. Pontoppidan, K. Sun, and J. E. Larsen. 2022. “Investigating the Provision and Context of Use of Hearing Aid Listening Programs From Real-world Data: Observational Study.” Journal of Medical Internet Research 24 (10):e36671. https://doi.org/10.2196/36671
  • Potts, L. G., and K. A. Kolb. 2014. “Effect of Different Signal-Processing Options on Speech-in-Noise Recognition for Cochlear Implant Recipients with the Cochlear CP810 Speech Processor.” Journal of the American Academy of Audiology 25 (4):367–379. https://doi.org/10.3766/jaaa.25.4.8
  • Potts, L. G., S. Jang, and C. L. Hillis. 2021. “Evaluation of Automatic Directional Processing with Cochlear Implant Recipients.” Journal of the American Academy of Audiology 32 (8):478–486. https://doi.org/10.1055/s-0041-1733967
  • Searchfield, G. D., T. Linford, K. Kobayashi, D. Crowhen, and M. Latzel. 2018. “The Performance of an Automatic Acoustic-Based Program Classifier Compared to Hearing Aid Users’ Manual Selection of Listening Programs.” International Journal of Audiology 57 (3):201–212. https://doi.org/10.1080/14992027.2017.1392048
  • Smits, C., S. T. Goverts, and J. M. Festen. 2013. “The Digits-in-Noise Test: Assessing Auditory Speech Recognition Abilities in Noise.” The Journal of the Acoustical Society of America 133 (3):1693–1706. https://doi.org/10.1121/1.4789933
  • van den Heuvel, J., S. T. Goverts, and T. S. Kapteyn. 1997. “Evaluation of Fitting Rules with a Programmable Hearing Aid.” Audiology 36 (5):261–278. https://doi.org/10.3109/00206099709071979
  • Wagener, K. C., M. Hansen, and C. Ludvigsen. 2008. “Recording and Classification of the Acoustic Environment of Hearing Aid Users.” Journal of the American Academy of Audiology 19 (4):348–370. https://doi.org/10.3766/jaaa.19.4.7
  • Wolfe, J., S. Neumann, M. Marsh, E. Schafer, L. Lianos, J. Gilden, L. O'Neill, P. Arkis, C. Menapace, E. Nel, et al. 2015. “Benefits of Adaptive Signal Processing in a Commercially Available Cochlear Implant Sound Processor.” Otology & Neurotology 36 (7):1181–1190. https://doi.org/10.1097/MAO.0000000000000781