
Hearing aid experience and background noise affect the robust relationship between working memory and speech recognition in noise

Pages 208-218 | Received 18 Nov 2018, Accepted 03 Oct 2019, Published online: 06 Dec 2019

Abstract

Objective: The aim of this study was to examine how background noise and hearing aid experience affect the robust relationship between working memory and speech recognition.

Design: Matrix sentences were used to measure speech recognition in noise. Three measures of working memory were administered. Study sample: 148 participants with at least 2 years of hearing aid experience.

Results: A stronger overall correlation between working memory and speech recognition performance was found in a four-talker babble than in a stationary noise background. This correlation was significantly weaker in participants with the most hearing aid experience than in those with the least experience when the background noise was stationary. In the four-talker babble, however, the strength of the correlation did not differ significantly between users with different amounts of experience.

Conclusion: In general, more explicit working memory processing is invoked when listening in a multi-talker babble. The matching processes (cf. the Ease of Language Understanding model, ELU) were more efficient for experienced than for less experienced users when perceiving speech. This study extends the existing ELU model by suggesting that mismatch may also lead to the establishment of new phonological representations in long-term memory.

Introduction

The relationship between speech recognition threshold (SRT) and working memory capacity (WMC) has been well established (denoted the WMC–SRT relationship in this article), and has been found to be robust in listeners with and without hearing loss (e.g. Petersen et al. 2016; Gordon-Salant and Cole 2016; Ng et al. 2014; Souza and Arehart 2015; see also Gatehouse and Gordon 1990; Davis 2003; Gatehouse, Naylor, and Elberling 2003; Lunner 2003; Foo et al. 2007; Lunner and Sundewall-Thorén 2007; Moore 2008; Rudner et al. 2009; Rudner, Rönnberg, and Lunner 2011; Picou, Ricketts, and Hornsby 2013; Dryden et al. 2017). However, the general picture may have to be nuanced after a recent meta-analysis by Füllgrabe and Rosen (2016). They found that, in the data sets they had access to, only a few percent of the variance was accounted for by WMC in normal hearing listeners, but that the dependence on WMC was higher in older listeners.

The Ease of Language Understanding model (ELU; Rönnberg 2003; Rönnberg et al. 2008; Rönnberg et al. 2013) was developed to understand the interplay between individual differences in speech recognition and WMC. Speech recognition has typically been assessed by matrix sentences presented in some kind of noise, and WMC has often been measured by the reading span test (Akeroyd 2008; Füllgrabe and Rosen 2016; Rönnberg et al. 2016). In the ELU model, an episodic buffer is postulated whose function is to Rapidly, Automatically, and Multimodally Bind perceived input into a PHOnological representation, referred to as RAMBPHO. The sub-lexical RAMBPHO information is then assumed to be matched with the phonological representations in semantic long-term memory (LTM), such that the lexicon is unlocked once matching is successful. The ELU model predicts the engagement of WMC in different listening situations. In easy listening situations, the incoming signal is readily matched with the phonological representations in the lexicon, and speech processing is implicit, automatic and effortless. In adverse situations, a mismatch may occur, and explicit and effortful processing is involved; working memory plays a particularly important role when there is a mismatch between the incoming signal and the phonological representations in semantic LTM. The matching process between the input signals and the representations in LTM, which depends on the quality of the incoming and stored signals, is crucial to successful language understanding. When phonological mismatch occurs, explicit working memory processing is triggered, and working memory-based inference-making and repair is instigated.

Based on the ELU model, mismatch could be triggered by suboptimal signals or adverse listening conditions, including listening with a hearing impairment, signal processing in a hearing instrument, or noisy conditions. In these conditions, the probability that the extraction of RAMBPHO information fails to match stored representations increases because the incoming signal for matching is distorted and/or suboptimal. In addition to these external factors concerning the quality of the incoming signal, mismatch may also occur when lexical access is slow, or when phonological representations in LTM are not sufficiently precise or stable as a result of long-term severe hearing loss (e.g. Andersson 2002; Rönnberg et al. 2008, 2011, 2014).

Below we describe in more detail how cognitive abilities relate to the processing of suboptimal signals arising from hearing loss, hearing aid signal processing and noise.

Hearing loss

Hearing loss distorts the speech signal along the auditory pathway. Depending on the type of hearing loss (conductive, sensorineural or mixed), the incoming signal may be distorted in the corresponding part of the auditory system. For instance, conductive hearing loss mostly affects the audibility of the signal because of poor conduction in the outer and middle ear, whereas greater degrees of sensorineural hearing loss may lead to poor supra-threshold spectro-temporal processing, such that the incoming signal cannot be optimally coded in the inner ear. In these cases, the signal is distorted in the peripheral auditory system.

Hearing loss not only impairs the quality of the incoming signal, but may also change the quality of phonological representations in LTM in people with severe hearing loss (Andersson and Lyxell 1998; Andersson 2002). The authors argue that this is presumably because prolonged and repeated exposure to imprecise incoming signals, or long-term deprivation of a clear and stable signal, creates less precise or less stable phonological representations in the long run. Thus, reduced hearing acuity may cause a deterioration of stored representations in LTM, which in effect results in a mismatch condition.

A previous study confirmed that phonological processing declines in people with moderate-to-profound hearing loss, but also showed that such a decline can be compensated for by WMC and explicit working memory processing of phonology (Classon et al. 2013). In other words, the effect of mismatch is further modulated by individual differences in WMC and the kind of phonological processing in these listeners.

Signal processing

Signal processing in hearing aids is primarily designed to improve speech intelligibility. For instance, a noise reduction algorithm is designed to improve the signal-to-noise ratio (SNR) by attenuating background noise. While processed speech becomes more audible, the resulting speech-in-noise signal is usually distorted and does not sound as natural as unprocessed speech. A few recent studies have investigated how working memory modulates the intelligibility of distorted or signal-modified speech. Although some studies found no association between WMC and signal-modified speech, or found conflicting results (e.g. Neher et al. 2014; Neher, Wagener, and Fischer 2018), other studies reported positive associations between WMC and the reception of processed speech. These studies include Arehart et al. (2015), who showed that the variability in intelligibility of signal-modified speech was explained by WMC in addition to hearing sensitivity in listeners with moderate high-frequency hearing loss. Souza et al. (2015) reported that speech intelligibility was poorer in listeners with low WMC than in those with high WMC as the fidelity of hearing-aid processed speech decreased. These findings are in line with Rönnberg et al. (2013), who state that, based on the ELU model, additional WMC is important for understanding hearing aid processed speech when matching is disrupted by low signal fidelity. Wagner, Toffanin, and Başkent (2016) studied the impact of degraded speech on lexical access, and found that lexical processing was more time consuming and effortful for degraded than for natural speech. In other words, the deployment of cognitive resources in speech understanding increases as the fidelity of the input signal decreases. Other research has also indicated that a degraded auditory signal affects working memory processing, such that encoding and subsequent storage into LTM become less efficient (Sörqvist and Rönnberg 2012; Thiel et al. 2016). In summary, empirical findings suggest that listening to distorted speech, or speech with poor fidelity, may require more cognitive capacity to make sense of the incoming speech signal.

Noise

A mismatch condition may also arise when the incoming speech signal is masked by noise. Background noise with and without linguistic content, referred to as informational and energetic masking respectively, has frequently been used and compared in research studies (see Mattys, Brooks, and Cooke 2009 for a review). Noise has a detrimental effect on speech understanding, and when the background noise contains interfering linguistic information, the SRT is further impacted; this is referred to as informational masking (Pollack 1975). Compared with energetic masking, informational masking is assumed to invoke more explicit processing, because additional resources are needed for stream segregation and for inhibiting the interfering linguistic information (Mattys, Brooks, and Cooke 2009). This is generally indicated by the robust relationship between WMC and speech recognition in speech maskers (see Besser et al. 2013 for a review).

Diverse results have been obtained in studies of hearing aid acclimatisation in terms of SRT. Ng et al. (2014) studied the WMC–SRT relationship during the acclimatisation period by comparing the correlations between WMC and SRT in noise obtained at the first hearing aid fitting session and 6 months post-fitting, and demonstrated that the relationship declined over the first 6 months of hearing aid use in new users. The authors argued that the demand on explicit processing resources shrank as the listeners became accustomed to the processed signals from their own hearing aids. One possible underlying mechanism is that long-term phonological representations congruent with the new (hearing aid processed) input were established or recalibrated after a period of constant exposure, and that these newly established representations could be matched more readily with the incoming (degraded) speech signal.

A recent study by Rählmann et al. (2018) examined the relationship between working memory and speech recognition in speech-shaped stationary noise (at the 80% intelligibility level) in experienced hearing aid users, in a cross-sectional comparison of the WMC–SRT relationship across hearing aid users with different amounts of experience. They found that, in an aided test condition using a master hearing aid (Grimm et al. 2006), this relationship was significantly weaker for listeners with the most hearing aid experience (7 years or more) than for those with the least experience (not more than 3.5 years); the correlation was non-significant for listeners with the most experience. Although the hearing aid signal processing used in the study was not the same as in the participants' own hearing aids, Rählmann et al. found a weakening of the WMC–SRT relationship with increasing hearing aid experience similar to that observed in the Ng et al. (2014) study. It is important to note, however, that the nature of the change in the WMC–SRT relationship differed between these two studies because of methodological differences. The weakening in Ng et al. was driven by improved speech recognition performance over 6 months in first-time hearing aid users, where WMC was assumed to remain constant over time, whereas Rählmann et al. examined differences in the WMC–SRT relationship due to long-term hearing aid experience beyond acclimatisation, with both WMC and SRT obtained at the time of testing. Nevertheless, the building up of new phonological representations is believed to be one of the common mechanisms behind the change in both types of design. In general, then, these results suggest that the expected change in the WMC–SRT relationship may not be constrained to one specific fitting formula or kind of signal processing. Departing from their results, it could be hypothesised that new long-term representations become established in the lexicon based on hearing aid processed speech input, and that these representations can be matched more readily with the processed speech signals (cf. Foo et al. 2007; Rudner et al. 2009; Rudner, Rönnberg, and Lunner 2011). This hypothesis represents a learning and adaptation mechanism akin to that postulated by the developmental ELU model (the D-ELU model; Holmer, Heimann, and Rudner 2016).

Thus, if phonological representations can be redefined or established over time based on the properties of the incoming signal, this should lead to less involvement of explicit working memory processing according to the ELU model. In other words, fewer cognitive resources would be required for speech recognition in noise (i.e. a weakening of the WMC–SRT relationship) as listeners become more experienced with hearing aids. Therefore, based on the results reported by Rählmann et al. (2018), we hypothesised that the strength of the WMC–SRT relationship depends on experience with hearing aids, irrespective of the signal processing in the listeners' own hearing aids.

Current study

In both Rählmann et al. (2018) and Ng et al. (2014), results were obtained in stationary noise only. In the current study, we compared the differential effect of background noise on the change in the WMC–SRT relationship with a larger sample size. Rönnberg et al. (2016) analysed the overall cognitive patterns in relation to hearing and outcome variables and found that four-talker babble (4 T) has a stronger association with speech recognition performance than speech-shaped noise (SSN). This is probably because 4 T contains linguistic information, which requires additional processing resources to segregate the auditory streams and to focus attention on the target talker, as compared with SSN (Mattys, Brooks, and Cooke 2009). Therefore, we hypothesised a stronger overall WMC–SRT relationship for 4 T. Further, this study examines the cross-sectional change in this relationship with hearing aid use experience. To examine whether hearing aid experience has an impact on the WMC–SRT relationship in these two types of background noise, the participants were divided into groups based on their reported number of years of hearing aid use. Relevant variables and cognitive skills that could have affected the WMC–SRT relationship, such as self-reported daily hearing aid usage, education, and verbal information processing skills, were also compared across the groups.

A large body of literature has demonstrated a robust relationship between speech recognition in noise and cognitive measures in experienced hearing aid users. Among the cognitive measures used, WMC has been found to best predict speech recognition in noise, and reading span has commonly been used as a measure of WMC. The battery of working memory tests employed in Rönnberg et al. (2016), comprising the reading span test, a semantic word-pair test and a visual spatial working memory test (VSWM) (see the “Methods” section for details), was included in the current study.

In all, this study therefore aimed to compare the WMC–SRT relationship obtained using each of these working memory measures, and to determine whether these measures could predict SRT in different background noises and at different levels of hearing aid experience.

Methods

Participants

The data presented in this study are part of a database collected from two hundred hearing aid users (Rönnberg et al. 2016). The participants had bilateral, symmetrical, mild to severe sensorineural hearing loss, quantified as the four-frequency pure-tone average (PTA) across 0.5, 1, 2 and 4 kHz (see Figure 1 in Rönnberg et al. 2016 for the average hearing thresholds), and were experienced hearing aid users randomly selected from the patient population at the University Hospital of Linköping in Sweden. Only participants who had already used hearing aids for more than 2 years were included; this restriction was employed to ensure that all participants were well acclimatised to their hearing aids at the time of the study. The study was approved by the regional ethics committee, and all participants gave informed consent. All participants had normal or corrected-to-normal vision.

General procedure

In this study, a subset of the tests administered in Rönnberg et al. (2016) was included. These tests assess WMC, general verbal information processing skills and speech recognition in noise.

Working memory

All tests included in this analysis are complex span tests, which measure the ability to process and store verbal information simultaneously (see Rönnberg et al. 2016 for details). These tests are all visually based and involve a processing component (usually measured using a judgement task) and a storage component (usually measured using a recall task). Such tasks have been found to predict performance on challenging linguistic tasks such as language comprehension and speech perception. The main difference between these working memory tests lies in the judgement task: while the reading span test involves a linguistic judgement at the sentence level, the semantic word-pair test requires a judgement based on the meaning of words, and the VSWM test involves no linguistic judgement, but rather a judgement about the similarity of two shapes shown on the screen.

Reading span test. The participants were told to judge whether each sentence in a list was sensible or absurd (Baddeley et al. 1985). After each list of sentences, they were prompted to recall either the first or the final word of the sentences in the list. The three-word sentences were presented word by word at a rate of 800 ms per word with an inter-stimulus interval of 75 ms. Lists of two, three, four and five sentences were presented in ascending order of length, and two lists were presented at each list length. A total of 28 sentences were presented. The test was scored as the total number of items correctly recalled irrespective of recall order.

Semantic word-pair test. The test consists of lists of word-pairs in Swedish (for example “Uggla”–“Sirap”, meaning “Owl”–“Syrup”), with list length varying from two to five and three trials per length. A total of 42 word pairs were presented. The task was to (1) judge which one of the two words presented side by side on the screen was a living object, with each word pair remaining on the screen until a response had been given, and (2) orally recall the words presented on either the left- or right-hand side of the list. This test is similar to the reading span test in that it involves simultaneous semantic processing and storage of verbal information, with the difference that it does not include any syntactic elements. The test was scored as the total number of items correctly recalled irrespective of serial order.

VSWM. In each trial, a 5 × 5 grid was presented on the screen. A pair of ellipses, of either identical or different shapes, was presented in one square of the grid. The participants were asked to judge whether the ellipses were identical. Once a response was given, another pair of ellipses was shown in another square of the grid, until the end of the list was reached. List length varied from two to five pairs, with three trials per length. A total of 42 trials were administered. After the last pair of ellipses in a list had been presented and judged, the participants were asked to mark, on a paper sheet with an empty 5 × 5 grid, the squares in which the ellipses had been presented, in the correct order. As for the other working memory tasks, performance was scored as the total number of squares (where the ellipses had been presented) correctly recalled.

Physical matching

The task was to judge whether two tokens of the same letter shown on the screen were identical in physical shape (for example, A–A, but not A–a) during a 5000-ms interval. A total of 16 trials were administered. This test measures general processing speed (Posner and Mitchell 1967) and was scored as the average response time.

Lexical decision

The test materials of the lexical decision test (Rönnberg 1990) consist of sequences of three letters presented on the screen. The participants' task was to judge whether each three-letter string constituted a real word, responding “yes” (for a real word) or “no” (for a non-word) within a 5000-ms interval before the presentation of the next item. Forty items were used, of which half were common Swedish three-letter words. The task was scored as the average response time.

Rhyme judgment

The participants were required to judge whether two words shown on the screen rhymed (Baddeley and Wilson 1985). There were four experimental conditions: the words (1) rhymed and were orthographically similar (e.g. “fritt”–“vitt”, meaning “free”–“white”), (2) rhymed but were orthographically dissimilar (e.g. “dags”–“lax”, meaning “time”–“salmon”), (3) did not rhyme but were orthographically similar (e.g. “salt”–“saft”, meaning “salt”–“juice”), and (4) did not rhyme and were orthographically dissimilar (e.g. “kalk”–“stol”, meaning “lime”–“chair”). Thirty-two word pairs were presented, half of which rhymed (conditions 1 and 2). This test measures the quality of the phonological representations in the lexicon (Lyxell 1994) and was scored as percentage correct.

Speech recognition test

The Hagerman sentences were used to determine SRTs, that is, the SNRs yielding 50% and 80% speech intelligibility, using an adaptive test procedure (Hagerman and Kinnefors 1995). This test uses a set of materials often referred to as matrix sentences; the corpus consisted of 10 lists of 10 low-redundancy five-word sentences. All sentence stimuli were presented in noise, binaurally through a pair of insert earphones, and the participants were told to verbally repeat each sentence. The adaptive procedure was based on the number of words correctly repeated in each sentence (between zero and five). The sentences were presented at 65 dB SPL (C-weighted equivalent level), and the noise level was adjusted upward or downward for each subsequent sentence according to the participant's response: with a score of 2 (two correct words in a sentence), the noise level was unchanged; with a score below 2 (one or zero correct words), the noise level of the following sentence was decreased (by 1 or 2 dB, respectively); and with a score above 2 (three, four or five correct words), the noise level was increased (by 1, 2 or 3 dB, respectively). Three sentence lists were administered for each test condition. Speech recognition performance at each intelligibility level was calculated as the average SNR across the sentences presented.
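
To make the adaptive rule concrete, here is a minimal sketch in Python of the level-tracking logic described above (the function name, the starting noise level, and the example response pattern are our own illustrative assumptions; the original test software is not specified in the paper):

```python
def estimate_srt(word_scores, start_noise_db=60.0):
    """Sketch of the adaptive rule described above.

    word_scores: number of correctly repeated words (0-5) per sentence.
    Speech is fixed at 65 dB SPL; after each sentence the noise level
    moves according to the score, and the SRT estimate is the average
    SNR across the presented sentences.
    """
    step_db = {0: -2, 1: -1, 2: 0, 3: +1, 4: +2, 5: +3}  # noise change per score
    speech_db, noise_db = 65.0, start_noise_db
    snrs = []
    for score in word_scores:
        snrs.append(speech_db - noise_db)  # SNR at which this sentence was heard
        noise_db += step_db[score]         # more noise after good scores, less after poor ones
    return sum(snrs) / len(snrs)

# Hypothetical response pattern drifting toward ~50% intelligibility:
print(estimate_srt([5, 4, 3, 2, 1, 2, 3, 2, 2, 1]))
```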

There were two background noise conditions (4 T and SSN) and three hearing aid signal processing conditions (see Rönnberg et al. 2016 for details): linear amplification with no noise reduction, linear amplification with noise reduction, and fast compression with no noise reduction. The processed signals were presented via an experimental hearing aid. The SSN used in this study had a long-term frequency spectrum identical to that of the Hagerman sentences (Hagerman 1982). Since signal processing and speech intelligibility level were not the focus of the present study, the SRTs in the following analyses were calculated as the average over the three signal processing conditions at both the 50% and 80% intelligibility levels for each of the two background noises.

Statistical analyses

A weighted working memory score was calculated for each participant. The three working memory measures are similar in overall design and test procedure, in that all involve a processing and a storage component, so we expected them to yield a one-factor solution in a principal component analysis. To reduce the number of working memory measures and comparisons in the subsequent correlation analyses, and to check whether the three tests would load onto one factor, a principal component analysis was performed and a weighted score was obtained. We then divided the entire group of participants into three subgroups based on the duration of their experience with a hearing aid, in order to investigate the putative change in the WMC–SRT relationship. Correlational analyses were performed to investigate this relationship under different test conditions for users with different hearing aid experience. As in previous studies (Ng et al. 2014; Rählmann et al. 2018), the correlation coefficients were compared using the Fisher r-to-z transformation in order to assess changes in the relative strength of the WMC–SRT relationship across subgroups. Finally, multiple regression analyses were performed to test the engagement of WMC in SRT in the subgroups.
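
For reference, the Fisher r-to-z comparison for two correlations $r_1$ and $r_2$ from independent samples of sizes $n_1$ and $n_2$ takes the standard form

$$ z' = \tfrac{1}{2}\ln\frac{1+r}{1-r}, \qquad z = \frac{z'_1 - z'_2}{\sqrt{\dfrac{1}{n_1 - 3} + \dfrac{1}{n_2 - 3}}}, $$

where $z$ is referred to the standard normal distribution. The subgroup comparisons reported in the Results are numerically consistent with this independent-samples form.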

Results

A total of 148 participants (out of 200) were included in the following analyses. Twenty-four participants were excluded because they had less than 2 years of hearing aid experience. A further twenty-eight participants were excluded for one or more of the following reasons: the duration of hearing aid use was not reported; they produced unreliable results in the speech recognition test, defined as a slope of the individual performance curve between the 50% and 80% performance levels of less than 2% (cf. Foo et al. 2007); and/or the presence of outlier data points (4 SDs above or below the mean).

Prior to further statistical analyses, the data from the working memory tests were transformed into z-scores, and a weighted working memory score was calculated for each participant. As expected, one factor with an eigenvalue greater than 1 was obtained, explaining 58.85% of the shared variance [KMO = 0.66; Bartlett: χ2(3) = 74.57, p < 0.0001], with all three tests having highly similar factor loadings (0.78, 0.77 and 0.75 for reading span, semantic word-pair and VSWM, respectively). The weighted score was calculated based on the factor loadings.
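
As an illustration, a minimal sketch of how such a loading-weighted composite can be computed from the three z-scored tests (using scikit-learn; all names are ours, and the exact scoring scheme in the original analysis may differ in detail):

```python
import numpy as np
from scipy.stats import zscore
from sklearn.decomposition import PCA

def wm_composite(reading_span, word_pair, vswm):
    """One-factor PCA composite over three working memory tests.

    Each argument is an array of raw scores, one per participant.
    The tests are z-scored, the first principal component is
    extracted, and the composite is the loading-weighted sum.
    """
    X = np.column_stack([zscore(reading_span), zscore(word_pair), zscore(vswm)])
    pca = PCA(n_components=1).fit(X)
    # Loadings = unit eigenvector scaled by sqrt(eigenvalue)
    loadings = pca.components_[0] * np.sqrt(pca.explained_variance_[0])
    return X @ loadings, loadings
```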

The impact of the duration of hearing aid use on the WMC–SRT relationship

To investigate the putative change in the WMC–SRT relationship in users with different durations of hearing aid use, we divided the entire group of participants into three subgroups based on the duration of their experience with a hearing aid: 2–5 years, 5–10 years, and 10 years or above. These subgroups are of comparable size, with large differences in duration of hearing aid experience, so that we were able to examine the build-up of representations in LTM on a longer time scale in experienced hearing aid users. Table 1 shows the background variables and test results for each group. The three subgroups (2–5 years, 5–10 years, and 10 years or above) had 66, 40 and 42 participants, respectively. Importantly, one-way ANOVAs showed that these three subgroups did not differ in terms of number of years of education, daily hearing aid usage, rhyme judgement, lexical access, physical matching, or any of the working memory measures (reading span, semantic word-pair, and VSWM), F(2, 145) = 0.06 to 2.46, ps > 0.05. This allows a fair comparison of the WMC–SRT relationship across subgroups.

Table 1. Means (and SDs) of background variables, Hagerman and working memory tests.

For obvious reasons, age and PTA differed between groups, F(2, 145) = 3.45 and 24.67, respectively, p < 0.05. The mean age of the 2–5 years group (59.68 years, SD 7.12) was significantly lower than that of the 10 years or above group (63.43 years, SD 8.27), while the 5–10 years group (62.30 years, SD 7.69) did not differ from either group. The mean PTA of the 2–5 years group (34.77 dB HL, SD 9.49) was significantly lower than that of the 5–10 years group (39.44 dB HL, SD 7.69), which in turn was significantly lower than that of the 10 years or above group (47.62 dB HL, SD 10.24). Results of the Hagerman sentence test also differed between groups, such that the average SNR of the 2–5 years group was significantly lower than those of the 5–10 years and 10 years or above groups, F(2, 145) = 10.92 and 7.43 in SSN and 4 T, respectively, p < 0.001.

Correlational analyses were performed to investigate the relationship between the WMC measures and SRT under different test conditions. Partial correlations were also calculated to control for the effects of age and hearing sensitivity; these showed a similar pattern to the non-partialled correlations and hence were not used in the subsequent analyses. Differences between the correlation strengths obtained in SSN and 4 T were analysed using the Fisher r-to-z transformation. The overall correlation between the weighted WMC score and SRT in 4 T (r = −0.37) was significantly stronger than that in SSN (r = −0.26), z = 2.39, p < 0.05. Results are shown in Table 2.

Table 2. Correlations between speech recognition thresholds obtained in SSN and 4 T and individual working memory tests together with the weighted score of WMC.

For each background noise, the correlations within the three subgroups were also calculated, both for the three individual measures and for the weighted working memory score. The difference between the correlation strengths obtained in SSN and 4 T within each subgroup was analysed; none of these differences was statistically significant, z < 1.77, p > 0.05. These results are shown in Table 2.

The weighted working memory score was used in this correlation analysis. Differences between the correlation strengths at 2–5 years, 5–10 years and 10 years or above within each type of background noise were analysed. In SSN, the difference in correlation strength between the 2–5 years group (r = −0.44) and the 10 years or above group (r = −0.06) was significant, z = 2.02, p < 0.05; the correlation of the 5–10 years group did not differ from that of the other two groups. In 4 T, the strength of the relationship did not differ significantly between the 2–5 years group (r = −0.47) and the 10 years or above group (r = −0.25), z = 1.25, p = 0.21, even though the correlations were statistically significant for the 2–5 years and 5–10 years groups but not for the 10 years or above group.
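
A short numerical check reproduces these subgroup comparisons from the reported correlations and group sizes (the helper function is ours, implementing the independent-samples Fisher test given in the Methods):

```python
import math

def fisher_z(r1, n1, r2, n2):
    """z statistic for the difference between two independent correlations."""
    num = math.atanh(r1) - math.atanh(r2)          # Fisher r-to-z transforms
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return num / se

# 2-5 years (n = 66) vs 10 years or above (n = 42), values from the text:
print(fisher_z(-0.44, 66, -0.06, 42))  # SSN: |z| ~ 2.02, significant
print(fisher_z(-0.47, 66, -0.25, 42))  # 4 T: |z| ~ 1.25, not significant
```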

Multiple regression analyses

Based on Ng et al. (2014), where a battery of cognitive tests including general processing speed, lexical access speed and a WMC measure was used, only PTA and WMC emerged as significant predictors of SRT. Therefore, in this study, we further investigated the engagement of WMC in SRT in the three subgroups using multiple regression analyses. Two sets of regression analyses were performed, one for SRT in each type of noise (SSN and 4 T). Both sets included age, PTA and the three individual working memory tests as predictor variables (i.e. five variables in total were entered). A total of six regression models using the backward elimination method were computed. All but one regression model (SSN, the 10 years or above group) were statistically significant (Table 3). For SSN, besides PTA, reading span emerged as the significant working memory predictor in the regression model for the 2–5 years group, F(2, 63) = 22.44, p < 0.001. None of the working memory predictors was significant for the 5–10 years group, F(1, 38) = 17.76, p < 0.001. The regression model for the 10 years or above group did not reach, but approached, statistical significance, F(1, 40) = 3.61, p = 0.07, with age emerging as the single predictor in this model. Table 3 displays both the full regression models and the statistically significant best models. Overall, the changes in R² ranged between −0.002 and −0.04, all non-significant (p > 0.05).

Table 3. Results of the regression analyses (backward elimination).

For 4 T, semantic word-pair, together with PTA, emerged as a significant predictor for the 2–5 years group, F(2, 63) = 19.74, p < 0.001. In the model for the 5–10 years group, F(2, 37) = 12.27, p < 0.001, there were two significant predictors: age and reading span. Reading span, together with PTA, also emerged as a significant predictor in the model for the 10 years or above group, F(2, 39) = 5.27, p < 0.01. Comparing the best and full regression models, the overall changes in R² were small (ranging between −0.024 and −0.053) and non-significant (p > 0.05).
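
For concreteness, a backward elimination loop of the kind described might look as follows (a sketch using statsmodels; the p-value criterion and all names are our assumptions rather than the authors' exact procedure):

```python
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(y, X, alpha=0.05):
    """Repeatedly drop the least significant predictor.

    y: outcome (e.g. SRT in one noise type); X: DataFrame with the five
    candidate predictors (age, PTA, reading span, word pair, VSWM).
    Returns the model in which all remaining predictors reach alpha,
    or None if no predictor survives.
    """
    predictors = list(X.columns)
    while predictors:
        model = sm.OLS(y, sm.add_constant(X[predictors])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return model              # all remaining predictors significant
        predictors.remove(worst)      # eliminate the weakest predictor
    return None
```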

In summary, WMC (measured using reading span) emerged as a significant predictor in SSN only for listeners with relatively less hearing aid experience (2–5 years). Together, the correlation comparisons and regression results show a clear weakening of the WMC–SRT relationship in SSN. For 4 T, WMC (measured using the semantic word-pair or reading span tests) consistently emerged as a significant predictor in the regression models for all three hearing aid experience groups, so the weakening pattern does not seem to be as clear in 4 T. For the 10 years or above group, the regression model in 4 T was significant, albeit with a small (yet significant) adjusted R² value, while the corresponding model in SSN did not reach statistical significance. Moreover, the correlation comparison between subgroups reached statistical significance in SSN but not in 4 T. Based on these results, the relationship in 4 T showed a similar trend, but the weakening appears more gradual than in SSN.

Discussion

This study investigated the WMC–SRT relationship in two types of background noise, a stationary noise and a multi-talker babble, and how this relationship changes with the duration of hearing aid experience. The results showed that this relationship is stronger in multi-talker babble than in stationary noise. As expected, the relationship was significantly weaker in hearing aid users with the most extensive experience than in those with relatively less experience in stationary background noise. A similar, yet more gradual, weakening of the WMC–SRT relationship was observed in the multi-talker babble condition. Thus, the WMC–SRT relationship appears to be modulated by both background noise and hearing aid experience.

Investigation of the WMC–SRT relationship in different types of background noise

This study clearly demonstrated that WMC correlates with speech recognition in noise in listeners with hearing loss, in agreement with the literature (e.g. Lunner 2003; Foo et al. 2007; Lunner and Sundewall-Thorén 2007; Moore 2008; Rudner et al. 2009; Rudner, Rönnberg, and Lunner 2011; Picou, Ricketts, and Hornsby 2013; Ng et al. 2014; Souza and Arehart 2015; Petersen et al. 2016). The WMC–SRT association was generally stronger in the multi-talker babble than in the stationary noise, in line with other independent studies (e.g. Neher et al. 2009; Desjardins and Doherty 2013). Rönnberg et al. (2016) also showed that the multi-talker babble has a stronger masking effect than the stationary noise. Linguistic information and amplitude envelopes are two key attributes that differentiate these two types of noise. In the multi-talker babble, the semantic content of the masker is intelligible and competes with the target sentence; this distracts from the focal task and may require extra cognitive resources to grasp the target speech. Further, the changing state of the multi-talker babble demands extra cognitive and/or executive resources for inhibiting the irrelevant information (see e.g. Sörqvist and Rönnberg 2012). In addition, the amplitude envelopes of the target speech are more similar to those of the multi-talker babble than to those of the stationary noise, which may hinder the extraction of the target speech from the background noise (Ezzatian et al. 2015), independently of whether the competing masker is intelligible. Taking these two factors together, listening in a multi-talker babble should be more dependent on cognitive abilities than listening in a stationary noise, which is also what we found in terms of dependence on WMC.

Weakening of WMC–SRT relationship in stationary noise

To investigate whether hearing aid experience has an impact on the WMC–SRT relationship, the participants were divided into three groups based on their hearing aid experience. It is important to note that the participants in these three groups did not differ in terms of relevant verbal information processing skills such as general processing speed, lexical access speed and rhyme judgement performance, the latter being an indication of the quality of phonological representations in the lexicon (Andersson 2002).

Looking at the correlation pattern of the overall WMC–SRT relationship, the correlations were statistically significant for the 2–5 years group but not for the 10 years or above group in both types of noise. This suggests a general weakening of this relationship as users become more experienced with their hearing aids, and is consistent with previous findings that the relationship declines with increasing hearing aid experience; for example, Rählmann et al. (2018) showed similar results in groups of hearing aid users with different durations of use experience.

Interestingly, further statistical analyses showed that the weakening of this relationship was significant in the stationary noise but not in the multi-talker babble. This corroborates the regression analyses, in which WMC predicted SRT in SSN only for listeners with relatively less hearing aid experience, whereas WMC predicted SRT in 4 T for all listeners regardless of their experience. This suggests that, relative to users with relatively less experience (2–5 years), the most experienced users (10 years or above) devote fewer cognitive resources when the background noise is stationary. In other words, the explicit working memory resources invoked for speech perceived in stationary noise decrease as hearing aid experience increases. This is in good agreement with the results reported by Rählmann et al. (2018), and is also in line with Habicht, Kollmeier, and Neher (2016), who showed that listeners with no previous hearing aid experience had longer response times in an aided speech perception task in stationary noise than listeners with at least one year of hearing aid experience. Within the ELU framework, their findings can be interpreted as a consequence of a more efficient matching process and hence faster speech understanding in an aided listening task.

The observed pattern of our results agrees with the prediction that phonological representations that are congruent with the incoming hearing aid processed speech have been established in the lexicon for the most experienced users in this study. Successful matching and unlocking of lexicon can consequently be achieved after the system has adapted to the altered input signals. Matching appears to be more efficient in these listeners probably because of having more representations that match the incoming processed signal, even though they had elevated hearing thresholds compared to the relatively less experienced users.

To recapitulate, the matching mechanism, and thus the engagement of explicit processing, depends mainly on two factors: (1) the fidelity of the incoming signal, or RAMBPHO information, in the presence of distortion, background noise, or hearing loss, and (2) the quality of the phonological representations in semantic LTM. In this study, we compared the WMC–SRT relationship within and across two types of background noise in listeners with different hearing aid experience, while keeping hearing aid signal processing and hearing loss constant. The results can be explained by the availability of newly established representations in LTM, such that the matching process is facilitated in the stationary noise for hearing aid users with extensive experience. Although such facilitation could also have been explained by faster lexical access, this is not supported by our data: there was no statistically significant difference between the hearing aid experience groups in physical matching or lexical decision performance. Therefore, the between-group difference in the matching process was not due to lexical access speed or general processing speed.

Since the quality of the existing phonological representations in LTM, as indexed by the rhyme judgement test, also did not differ between these groups despite age differences, it is reasonable to assume that the original phonological representations in the lexicon are preserved. Therefore, we propose that when listeners are exposed to new hearing aid signal processing, newly established and recalibrated phonological representations are added or attached to existing representations in the lexicon. This would take place after consistent exposure to distorted and mismatched listening conditions, provided that successful speech understanding is achieved after several iterations of mismatch in working memory. In this way, more newly established phonological representations congruent with hearing aid processed input signals become available to unlock the lexicon.

It is important to note that, as in previous studies comparing users with different durations of hearing aid experience (e.g. Rählmann et al. 2018), the hearing aid signal processing used in the present study was not the same as that in the listeners' own hearing aids. Thus, it could be that the new long-term representations established in the lexicon based on hearing aid processed speech input are broadly similar across different types of signal processing.

WMC–SRT relationship in a multi-talker babble

The weakening of the WMC–SRT relationship was not statistically significant in the multi-talker babble. A possible reason is that this type of background has a stronger masking effect, as discussed above, such that the interfering speech may invoke explicit processing even when the listener focuses on the target speech, and this may not necessarily change as a function of hearing aid experience. Compared with the stationary background noise, which gives purely energetic masking, the linguistic information in the babble also masks the target speech, requiring more explicit matching and mismatching processes before the lexicon is unlocked. In continuous discourse in this type of background noise, numerous concurrent matching processes go on in working memory. Based on the ELU prediction, because the incoming signal is masked by a varying and less predictable noise containing linguistic information, the speech input cannot always be matched readily with the stored phonological representations in the lexicon. This applies also to the most experienced users in this study, who presumably have recalibrated phonological representations in LTM that are congruent with the processed input signal; these recalibrated representations may not facilitate the matching process to the same extent as in the stationary noise. The success of the matching process depends on the actual linguistic/semantic information in the masker, which may vary across test occasions and daily listening situations. This could explain why, relative to the stationary noise condition, this background noise results in more mismatching processes and invokes explicit processing, and why we observed only a trend of, rather than a statistically significant, weakening of the relationship in multi-talker babble. These results may also relate to the clinical observation that long-term experienced hearing aid users still find listening in noise challenging.

Theoretical implications

The present study adds two new insights to the matching mechanism in the ELU model: (1) long-term hearing aid use is significantly related to less explicit processing in the matching mechanism when the background noise is stationary, and (2) the establishment of new, congruent representations appears to take place when the RAMBPHO input signal consistently mismatches with existing representations in the lexicon, eventually yielding recalibrated representations in the lexicon that the RAMBPHO input is congruent with and can be matched readily against. Together, these insights suggest that repeated and consistent exposure to mismatching conditions can enhance the matching process through newly established phonological representations in LTM. Such enhancement of the matching mechanism occurs mainly when listening in stationary noise.

The current ELU model (Rönnberg et al. 2013) does not address the mechanism of establishing new representations based on prolonged exposure to listening conditions that trigger mismatch, such as wearing hearing aids regularly over a certain period of time; that is, it does not specify how the matching process that takes place in working memory alters or modifies the phonological representations in LTM. The recent Developmental ELU model (D-ELU; Holmer, Heimann, and Rudner 2016; Rudner and Holmer 2016) addresses the modification of existing representations when a mismatch between the input signal and existing linguistic representations occurs. In particular, the D-ELU predicts that when there is a mismatch in the developing language system, the explicit processing loop engages both domain-general representations in semantic LTM and domain-specific representations (e.g. sign-specific phonology, in Holmer, Heimann, and Rudner 2016, for deaf and hard-of-hearing signing children) in the explicit processing of the incoming signal. This process leads to the establishment of new representations or the redefinition of stored representations. The results of the current study are in line with and extend the D-ELU model, such that the mismatch process may lead to the establishment of new phonological representations, with each newly recalibrated phonological representation being added to and integrated with the corresponding existing semantic representation and its existing phonological representation.

Multiple measures of working memory

The present study measured WMC using three different tests, and a composite score was used to give a robust measure of working memory. All tests measure complex working memory span, comprising a processing and a storage component. Both the reading span and semantic word-pair tests are verbally based: the former has a processing component based on semantic verification judgements of sentence stimuli, while the latter involves processing the meaning of word pairs only, with no syntactic information. The VSWM test is non-verbal; the task was chosen such that verbal labels were not applicable to the classification judgements. All working memory tests yielded comparable correlation coefficients with SRT, and the principal component analysis showed that the three tests had very similar loadings on the same factor. These results indicate that all three tests carried roughly equal weight in measuring the WMC construct, and that the ability to process and store simultaneously in general, rather than the ability to process and store specifically verbal information, predicts SRT in noise. Souza and Arehart (2015) used two different reading span tests to measure WMC. One was identical to the reading span test used in the present study, but in a different language; the other was also a dual task involving a semantic judgement similar to the reading span test, but with the to-be-remembered word presented after the judgement task. They found that both WMC measures showed a robust relationship with speech recognition in noise. This is in line with our findings that dual-task WMC measures of this kind predict speech recognition in noise well (Rönnberg et al. 2013).

Conclusion

Based on the ELU model, the deployment of explicit processing resources depends on mismatch processes, and listening to distorted or hearing aid processed speech is hypothesised to be a cause of mismatch. This study investigated whether long-term hearing aid experience facilitates the matching process by establishing new phonological representations in the lexicon that successively become congruent with the processed speech input. We found that more explicit working memory processing is invoked when listening in a multi-talker babble than in a stationary noise. In stationary noise, the WMC–SRT relationship was stronger for relatively less experienced than for the most experienced hearing aid users, indicating that the matching mechanism is more efficient for long-term users when perceiving speech. The results extend the existing ELU/D-ELU models: when the RAMBPHO input signal consistently mismatches with existing representations in the lexicon over a prolonged period, the mismatch processes lead to the establishment or recalibration of phonological representations. However, for users with extensive hearing aid experience, despite the availability of recalibrated phonological representations in LTM, these may not facilitate the matching process when listening in a changing-state background noise with interfering speech information. The incoming signal would then be incongruent with the recalibrated phonological representations stored in the lexicon, and a mismatch condition would still arise and call for explicit working memory processing resources.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This research was funded by a Linnaeus Centre HEAD excellence centre grant (349-2007-8654) from the Swedish Research Council and a programme grant from the Swedish Research Council for Health, Working Life and Welfare (FORTE) (2012-1693), awarded to the second author.

References

  • Akeroyd, M. A. 2008. “Are Individual Differences in Speech Reception Related to Individual Differences in Cognitive Ability? A Survey of Twenty Experimental Studies with Normal and Hearing-Impaired Adults.” International Journal of Audiology 47 (suppl 2): S53–S71. doi:10.1080/14992020802301142.
  • Andersson, U. 2002. “Deterioration of the Phonological Processing Skills in Adults with an Acquired Severe Hearing Loss.” European Journal of Cognitive Psychology 14 (3): 335–352. doi:10.1080/09541440143000096.
  • Andersson, U., and B. Lyxell. 1998. “Phonological Deterioration in Adults with an Acquired Severe Hearing Impairment.” Scandinavian Audiology 27 (4): 93–100. doi:10.1080/010503998420711.
  • Arehart, K., P. Souza, J. Kates, T. Lunner, and M. S. Pedersen. 2015. “Relationship Between Signal Fidelity, Hearing Loss and Working Memory for Digital Noise Suppression.” Ear and Hearing 36 (5): 505. doi:10.1097/AUD.0000000000000173.
  • Baddeley, A., R. Logie, I. Nimmo-Smith, and N. Brereton. 1985. “Components of Fluent Reading.” Journal of Memory and Language 24 (1): 119. doi:10.1016/0749-596X(85)90019-1.
  • Baddeley, A., and B. Wilson. 1985. “Phonological Coding and Short-Term Memory in Patients without Speech.” Journal of Memory and Language 24 (4): 490. doi:10.1016/0749-596X(85)90041-5.
  • Besser, J., T. Koelewijn, A. A. Zekveld, S. E. Kramer, and J. M. Festen. 2013. “How Linguistic Closure and Verbal Working Memory Relate to Speech Recognition in Noise—A Review.” Trends in Amplification 17 (2): 75–93. doi:10.1177/1084713813495459.
  • Classon, E., M. Rudner, M. Johansson, and J. Rönnberg. 2013. “Early ERP Signature of Hearing Impairment in Visual Rhyme Judgment.” Frontiers in Psychology 4: 241. doi:10.3389/fpsyg.2013.00241.
  • Davis, A. 2003. “Population Study of the Ability to Benefit from Amplification and the Provision of a Hearing Aid in 55–74-Year-Old First-Time Hearing Aid Users.” International Journal of Audiology 42 (suppl 2): 39–52. doi:10.3109/14992020309074643.
  • Desjardins, J. L., and K. A. Doherty. 2013. “Age-Related Changes in Listening Effort for Various Types of Masker Noises.” Ear and Hearing 34 (3): 261–272. doi:10.1097/AUD.0b013e31826d0ba4.
  • Dryden, A., H. A. Allen, H. Henshaw, and A. Heinrich. 2017. “The Association Between Cognitive Performance and Speech-in-Noise Perception for Adult Listeners: A Systematic Literature Review and Meta-Analysis.” Trends in Hearing 21: 1–21. doi:10.1177/2331216517744675.
  • Ezzatian, P., L. Li, K. Pichora-Fuller, and B. A. Schneider. 2015. “Delayed Stream Segregation in Older Adults: More than Just Informational Masking.” Ear and Hearing 36 (4): 482–484. doi:10.1097/AUD.0000000000000139.
  • Foo, C., M. Rudner, J. Rönnberg, and T. Lunner. 2007. “Recognition of Speech in Noise with New Hearing Instrument Compression Release Settings Requires Explicit Cognitive Storage and Processing Capacity.” Journal of the American Academy of Audiology 18 (7): 618–631. doi:10.3766/jaaa.18.7.8.
  • Füllgrabe, C., and S. Rosen. 2016. “On the (Un)importance of Working Memory in Speech-in-Noise Processing for Listeners with Normal Hearing Thresholds.” Frontiers in Psychology 7: 1268. doi:10.3389/fpsyg.2016.01268.
  • Gatehouse, S., and J. Gordon. 1990. “Response Times to Speech Stimuli as Measures of Benefit from Amplification.” British Journal of Audiology 24 (1): 63–68. doi:10.3109/03005369009077843.
  • Gatehouse, S., G. Naylor, and C. Elberling. 2003. “Benefits from Hearing Aids in Relation to the Interaction between the User and the Environment.” International Journal of Audiology 42 (sup1): 77–85. doi:10.3109/14992020309074627.
  • Gordon-Salant, S., and S. S. Cole. 2016. “Effects of Age and Working Memory Capacity on Speech Recognition Performance in Noise among Listeners with Normal Hearing.” Ear and Hearing 37 (5): 593–602. doi:10.1097/AUD.0000000000000316.
  • Grimm, G., T. Herzke, D. Berg, and V. Hohmann. 2006. “The Master Hearing Aid: A PC-Based Platform for Algorithm Development and Evaluation.” Acta Acustica United with Acustica 92 (4): 618–628. doi:10.3813/AAA.918878.
  • Habicht, J., B. Kollmeier, and T. Neher. 2016. “Are Experienced Hearing Aid Users Faster at Grasping the Meaning of a Sentence than Inexperienced Users? An Eye-Tracking Study.” Trends in Hearing 20: 1–13. doi:10.1177/2331216516660966.
  • Hagerman, B. 1982. “Sentences for Testing Speech Intelligibility in Noise.” Scandinavian Audiology 11 (2): 79–87. doi:10.3109/01050398209076203.
  • Hagerman, B., and C. Kinnefors. 1995. “Efficient Adaptive Methods for Measuring Speech Reception Threshold in Quiet and in Noise.” Scandinavian Audiology 24 (1): 71–77. doi:10.3109/01050399509042213.
  • Holmer, E., M. Heimann, and M. Rudner. 2016. “Imitation, Sign Language Skill and the Developmental Ease of Language Understanding (D-ELU) Model.” Frontiers in Psychology 7: 107. doi:10.3389/fpsyg.2016.00107.
  • Lunner, T. 2003. “Cognitive Function in Relation to Hearing Aid Use.” International Journal of Audiology 42 (suppl 1): S49–S58. doi:10.3109/14992020309074624.
  • Lunner, T., and E. Sundewall-Thorén. 2007. “Interactions Between Cognition, Compression, and Listening Conditions: Effects on Speech-in-Noise Performance in a Two-Channel Hearing Aid.” Journal of the American Academy of Audiology 18 (7): 604–617. doi:10.3766/jaaa.18.7.7.
  • Lyxell, B. 1994. “Skilled Speechreading: A Single-Case Study.” Scandinavian Journal of Psychology 35 (3): 212–219. doi:10.1111/j.1467-9450.1994.tb00945.x.
  • Mattys, S. L., J. Brooks, and M. Cooke. 2009. “Recognizing Speech under a Processing Load: Dissociating Energetic from Informational Factors.” Cognitive Psychology 59 (3): 203–243. doi:10.1016/j.cogpsych.2009.04.001.
  • Moore, B. C. 2008. “The Choice of Compression Speed in Hearing Aids: Theoretical and Practical Considerations and the Role of Individual Differences.” Trends in Amplification 12 (2): 103–112. doi:10.1177/1084713808317819.
  • Neher, T., T. Behrens, S. Carlile, C. Jin, L. Kragelund, A. S. Petersen, and A. van Schaik. 2009. “Benefit from Spatial Separation of Multiple Talkers in Bilateral Hearing-Aid Users: Effects of Hearing Loss, Age, and Cognition.” International Journal of Audiology 48 (11): 758–774. doi:10.3109/14992020903079332.
  • Neher, T., G. Grimm, V. Hohmann, and B. Kollmeier. 2014. “Do Hearing Loss and Cognitive Function Modulate Benefit from Different Binaural Noise-Reduction Settings?” Ear and Hearing 35 (3): e52–e62. doi:10.1097/AUD.0000000000000003.
  • Neher, T., K. C. Wagener, and R. L. Fischer. 2018. “Hearing Aid Noise Suppression and Working Memory Function.” International Journal of Audiology 57 (5): 335–344. doi:10.1080/14992027.2017.1423118.
  • Ng, E. H., E. Classon, B. Larsby, S. Arlinger, T. Lunner, M. Rudner, and J. Rönnberg. 2014. “Dynamic Relation Between Working Memory Capacity and Speech Recognition in Noise during the First 6 Months of Hearing Aid Use.” Trends in Hearing 18: 1–10. doi:10.1177/2331216514558688.
  • Petersen, E. B., T. Lunner, M. D. Vestergaard, and E. Sundewall Thorén. 2016. “Danish Reading Span Data from 283 Hearing-Aid Users, Including a Sub-Group Analysis of Their Relationship to Speech-in-Noise Performance.” International Journal of Audiology 55 (4): 254–261. doi:10.3109/14992027.2015.1125533.
  • Picou, E. M., T. A. Ricketts, and B. W. Hornsby. 2013. “How Hearing Aids, Background Noise, and Visual Cues Influence Objective Listening Effort.” Ear and Hearing 34 (5): e52–e64. doi:10.1097/AUD.0b013e31827f0431.
  • Pollack, I. 1975. “Auditory Informational Masking.” The Journal of the Acoustical Society of America 57 (S1): S5. doi:10.1121/1.1995329.
  • Posner, M. I., and R. F. Mitchell. 1967. “Chronometric Analysis of Classification.” Psychological Review 74 (5): 392. doi:10.1037/h0024913.
  • Rudner, M., C. Foo, J. Rönnberg, and T. Lunner. 2009. “Cognition and Aided Speech Recognition in Noise: Specific Role for Cognitive Factors following Nine-Week Experience with Adjusted Compression Settings in Hearing Aids.” Scandinavian Journal of Psychology 50 (5): 405–418. doi:10.1111/j.1467-9450.2009.00745.x.
  • Rudner, M., and E. Holmer. 2016. “Working Memory in Deaf Children Is Explained by the Developmental Ease of Language Understanding (D-ELU) Model.” Frontiers in Psychology 7: 1047. doi:10.3389/fpsyg.2016.01047.
  • Rudner, M., J. Rönnberg, and T. Lunner. 2011. “Working Memory Supports Listening in Noise for Persons with Hearing Impairment.” Journal of the American Academy of Audiology 22 (3): 156–167. doi:10.3766/jaaa.22.3.4.
  • Rählmann, S., M. Meis, M. Schulte, J. Kießling, M. Walger, and H. Meister. 2018. “Assessment of Hearing Aid Algorithms Using a Master Hearing Aid: The Influence of Hearing Aid Experience on the Relationship Between Speech Recognition and Cognitive Capacity.” International Journal of Audiology 57 (suppl 3): S105–S111. doi:10.1080/14992027.2017.1319079.
  • Rönnberg, J. 1990. “On the Distinction Between Perception and Cognition.” Scandinavian Journal of Psychology 31 (2): 154–156. doi:10.1111/j.1467-9450.1990.tb00827.x.
  • Rönnberg, J. 2003. “Cognition in the Hearing Impaired and Deaf as a Bridge Between Signal and Dialogue: A Framework and a Model.” International Journal of Audiology 42 (suppl 1): S68–S76. doi:10.3109/14992020309074626.
  • Rönnberg, J., H. Danielsson, M. Rudner, S. Arlinger, O. Sternäng, Å. Wahlin, and L.-G. Nilsson. 2011. “Hearing Loss Is Negatively Related to Episodic and Semantic Long-Term Memory but Not to Short-Term Memory.” Journal of Speech, Language, and Hearing Research 54 (2): 705–726. doi:10.1044/1092-4388(2010/09-0088).
  • Rönnberg, J., S. Hygge, G. Keidser, and M. Rudner. 2014. “The Effect of Functional Hearing Loss and Age on Long- and Short-Term Visuospatial Memory: Evidence from the UK Biobank Resource.” Frontiers in Aging Neuroscience 6: 326. doi:10.3389/fnagi.2014.00326.
  • Rönnberg, J., T. Lunner, E. H. N. Ng, B. Lidestam, A. A. Zekveld, P. Sörqvist, B. Lyxell, et al. 2016. “Hearing Impairment, Cognition and Speech Understanding: Exploratory Factor Analyses of a Comprehensive Test Battery for a Group of Hearing Aid Users, the n200 Study.” International Journal of Audiology 55 (11): 623–642. doi:10.1080/14992027.2016.1219775.
  • Rönnberg, J., T. Lunner, A. Zekveld, P. Sörqvist, H. Danielsson, B. Lyxell, Ö. Dahlström, et al. 2013. “The Ease of Language Understanding (ELU) Model: Theoretical, Empirical, and Clinical Advances.” Frontiers in Systems Neuroscience 7: 31. doi:10.3389/fnsys.2013.00031.
  • Rönnberg, J., M. Rudner, C. Foo, and T. Lunner. 2008. “Cognition Counts: A Working Memory System for Ease of Language Understanding (ELU).” International Journal of Audiology 47 (suppl 2): S99–S105. doi:10.1080/14992020802301167.
  • Souza, P., and K. Arehart. 2015. “Robust Relationship Between Reading Span and Speech Recognition in Noise.” International Journal of Audiology 54 (10): 705–713. doi:10.3109/14992027.2015.1043062.
  • Souza, P. E., K. H. Arehart, J. Shen, M. Anderson, and J. M. Kates. 2015. “Working Memory and Intelligibility of Hearing-Aid Processed Speech.” Frontiers in Psychology 6: 526. doi:10.3389/fpsyg.2015.00526.
  • Sörqvist, P., and J. Rönnberg. 2012. “Episodic Long-Term Memory of Spoken Discourse Masked by Speech: What Is the Role for Working Memory Capacity?” Journal of Speech, Language, and Hearing Research 55 (1): 210–218. doi:10.1044/1092-4388(2011/10-0353).
  • Thiel, C. M., J. Özyurt, W. Nogueira, and S. Puschmann. 2016. “Effects of Age on Long-Term Memory for Degraded Speech.” Frontiers in Human Neuroscience 10: 473. doi:10.3389/fnhum.2016.00473.
  • Wagner, A. E., P. Toffanin, and D. Başkent. 2016. “The Timing and Effort of Lexical Access in Natural and Degraded Speech.” Frontiers in Psychology 7: 398. doi:10.3389/fpsyg.2016.00398.