Research Article

Improving phonological skills and reading comprehension in deaf children: A new multisensory approach


ABSTRACT

Purpose

To explore the effectiveness of a multisensory program integrating visual, kinesthetic, and vibrotactile information to train phonological and syntactic reading abilities in prelingually deaf children between 6 and 10 years of age.

Method

We examined whether the multisensory phonological training in combination with syntactic training (MPT+ST) improved phonological and syntactic reading abilities in prelingually deaf children in comparison with a non-multisensory phonological training in combination with ST (nonMPT+ST). Furthermore, we compared the phonological recoding abilities (via the pseudohomophone effect) of deaf children who received the MPT+ST training with those of their hearing peers. Finally, we investigated whether the effects observed in deaf children after MPT+ST and nonMPT+ST were retained six months after training.

Results

The MPT+ST improved phonological recoding abilities, both in reading isolated words and in ways that contributed to improved performance on syntactic processing tasks. After MPT+ST, the deaf children’s pseudohomophone effect was similar to that of typical hearing children, but this effect was not retained six months after training.

Conclusion

The phonological route is mediated by multiple sensory systems and MPT+ST contributes to deaf children’s ability to achieve higher reading comprehension by the time they finish primary education; however, sustaining the gains likely requires a longer-term intervention.

Introduction

Learning to read is one of the greatest difficulties that deaf children face in school. While typical hearing children reach a functional level of literacy by fifth grade (around 10 years of age), many deaf children do not reach a functional level of reading comprehension at all during the school years (e.g., Conrad, 1979; Kyle, 1980; Moreno-Pérez et al., 2015; Qi & Mitchell, 2012; Torres & Santana, 2005; Wauters et al., 2006).

A study by the Gallaudet Research Institute found that the mean reading performance of deaf students in the US who were finishing high school was comparable to that of hearing students in third grade. In this study, only 5% of deaf twelfth-grade students performed as well as typical hearing twelfth graders on a reading comprehension test (Gallaudet Research Institute, 2004). Similar results were found in a study of 93 deaf students aged 9 to 20 years, who were in various years of Primary Education and Compulsory Secondary Education across 34 different schools in Spain (Torres & Santana, 2005). Among these deaf participants, approximately 38% used oral language as their primary communication system, 25% used bimodal (oral and sign language) communication, 14% used sign language, and 1% used cued speech. The authors found that at the end of their primary education (13 years of age), the deaf students’ reading levels were at or below those seen in typical hearing students at the start of grade school (7 years of age).

There is some evidence for benefits from cochlear implants (CIs) in the development of oral language production and comprehension in deaf children, especially when they are implanted at an early age (e.g., Scarabello et al., 2020; Szagun & Stumper, 2012). However, the effects of CIs on reading acquisition are not so clear (e.g., Bell et al., 2019; Geers et al., 2008; Rezaei et al., 2016). For example, Rezaei et al. (2016) studied reading comprehension in deaf Persian children 8 to 9 years of age who had received early cochlear implantation (at age 2), in comparison with deaf children who used conventional hearing aids and with typical hearing children. These authors found that hearing children showed greater reading comprehension levels than all deaf children, independent of CI use.

Thus, for the vast majority of deaf children, regardless of communication system and use of cochlear implants, reaching a sufficient level of reading comprehension in the school years is still one of the hardest tasks they face. This raises the question: what fundamental skills are failing to develop in these children, and why?

The primary hypotheses regarding impaired reading skills involve the development of phonological awareness. Some authors have defended the early hypothesis that individuals who are deprived of audition from an early age cannot access the phonological awareness required to use phonological codes in reading (e.g., Perfetti & Sandak, 2000; Shankweiler et al., 1979). According to the dual-route reading model (e.g., Coltheart, 2005; M. Coltheart et al., 2001), phonological recoding abilities play a key role in reading by mediating the phonological – or sub-lexical – route. When reading is mediated by this route, after visual analysis of a word, a conversion from grapheme to phoneme is necessary to create the phonological representation of the word. This phonological recoding allows for comprehensive reading as long as the word’s representation or code has been stored in the phonological lexicon: once the representation is activated, the reader can access its meaning through the semantic system (e.g., V. Coltheart et al., 1988).

In languages with a transparent orthography like Spanish, this reading route is especially important when children start learning to read. Each time a child encounters a written word for which they already have a phonological representation, good phonological recoding abilities allow them to recognize it and access its meaning, while also developing an orthographic representation.

Furthermore, the phonological route may be crucial in self-teaching. In this case, when children encounter an unknown word, if they are able to decode it (through phonological recoding), they have the opportunity to acquire orthographic information from it and start progressively developing its orthographic representation (e.g., Share, 1995, 1999). In the case of deaf children, if auditory deprivation hinders or prevents the development of phonological codes and recoding abilities, this could explain in part why so many of them struggle in learning to read in school, and may never reach reading comprehension levels similar to their hearing peers.

On the other hand, authors have more recently argued that the use of phonological codes during reading is possible in individuals deprived of hearing from an early age, even in deaf children with poor performance in phonological awareness tasks. In fact, there is evidence of phonological recoding during reading in deaf children. For example, a study with deaf Danish children between 6 and 13 years old showed phonological recoding skills in word reading through a visual lexical decision task that resulted in a pseudohomophone effect (Transler & Reitsma, 2005). This effect consists of a greater reaction time and/or greater percentage of errors in recognizing pseudowords that are pronounced the same as a real word (e.g., in English the pronunciation of the pseudohomophone “bloo” is the same as the word “blue”). This effect has been used as an indicator of the role of phonological awareness in the reading of words. The interpretation is that when the pronunciation of a pseudoword matches that of a word, it is harder to identify as a pseudoword, because the activation of the phonological representation interferes with the detection of the mismatch between the orthographic representation and the visual lexicon, hindering the response. The pseudohomophone effect observed in deaf children, while smaller than that of their hearing peers, provided evidence that deaf children do utilize a reading route that involves phonological recoding during recognition of written words (Transler & Reitsma, 2005).

Similarly, a later study by Daza et al. (2014) examined the pseudohomophone effect in deaf Spanish children between 9 and 16 years of age who used different types of hearing devices (amplifiers vs. cochlear implants) and different communication systems (oral language vs. sign language). The deaf children showed a significant pseudohomophone effect, with no difference between the type of hearing device or the communication system used in school. More recent studies in which eye movements were recorded during sentence reading found evidence of phonological recoding in deaf English children between 8 and 9 years of age from varying backgrounds of parental hearing (Blythe et al., 2018), and in Chinese adolescents between 13 and 20 years (Yan et al., 2020).

The results of the aforementioned studies suggest that individuals deprived of audition from a very early age can develop phonological knowledge. Therefore, reading in deaf children might be mediated by an alternative phonological route based on phonological representations of words that develop in the absence, or reduction, of auditory input. In these individuals, phonological knowledge could be developed through multimodal aspects of language.

From this perspective, some authors have proposed that phonological representations for deaf individuals do not develop exclusively through hearing stimulation, but rather are based on visual and kinesthetic information obtained from lip reading and orthography, articulation and imitation of labial movements (articulatory feedback), and the dactylology (hand gestures) of sign language (e.g., Alegría et al., 1992; Bellugi et al., 1975; Haptonstall-Nykaza & Schick, 2007; Harris & Moreno, 2006; Leybaert, 1993). Furthermore, the phonological representations of words might also be enhanced in these individuals through other channels of non-acoustic information. For example, the vibrotactile information from spoken words (e.g., the vibrotactile stimulation pattern from the sound of a spoken word) can convey information about the phonological structure of words (Fletcher et al., 2018; Kello & Bernstein, 2000). Nor are the sensory modalities for language information limited to speech articulators or oral articulatory feedback: even oral language involves movements of the head, torso, and arms that convey relevant information about semantic and emotional content (Hadar et al., 1985; Livingstone & Palmer, 2016; McClave, 2000; Tiede et al., 2019). In addition, other aspects of spoken language include information about the timing and location of sound, which are conveyed or affected by movements of the head and body, such as in the head-related transfer function and in planning response execution (Rauschecker, 2018; Rauschecker & Tian, 2000). In fact, there is some evidence that head and body movements are coordinated to achieve phonetic targets in deaf individuals who use sign language (Tyrone & Mauk, 2016).

If deaf children develop phonological knowledge through a multisensory code in which sensorimotor information (e.g., visual, kinesthetic, vibrotactile, and motor) is integrated, this raises the question of whether phonological recoding abilities can be enhanced through multisensory training. As far as we know, studies that address the effects of training with multisensory tasks to enhance phonological recoding abilities in reading are scarce (for a review, see Wang et al., 2008).

Furthermore, despite previous literature pointing to deficient syntactic abilities in deaf children (e.g., Alegría et al., 2020), there are very few studies that explore the effectiveness of specific interventions to enhance these abilities. Single words do not transmit a message; a competent reader not only recognizes a word but is able to establish its contextual meaning in a sentence. When children are learning to read, syntactic abilities allow them to separate each sentence into its components, classify those components according to their grammatical roles, and finally build a structure that enables the extraction of meaning (Hoover & Gough, 1990; Tunmer & Hoover, 1992).

As has been suggested by other authors (e.g., Cuetos, 2008), learning to read requires phonological abilities that work in tandem with other, higher-order reading processes such as syntactic abilities. In deaf children, combining multisensory training of phonological abilities with training in syntactic abilities could bolster the process of learning to read by the time children finish school.

Taken together, the evidence presented here suggests a need for a phonological training appropriate for deaf children, one that accesses the available sensory modalities and makes use of sensorimotor integration abilities. We designed a program to train phonological and syntactic abilities in order to enhance the development of reading comprehension for deaf children in school: the Multisensory Brain program. This program can be used with deaf children between 6 and 12 years old, independent of preferred communication system (oral or sign language) and of device (CI or hearing aid) use.

One of the novel characteristics of the Multisensory Brain program is that the training of phonological abilities is carried out in the absence of auditory information. Therefore, although the phonological tasks require the use of and association with the words’ phonology and orthography, the information used is visual, kinesthetic, and vibrotactile in nature. In addition, the Multisensory Brain program includes a set of tasks that aim to train the basic operations of syntactic processing that allow for the extraction of meaning from written sentences.

The main goal of the present study was to measure the effectiveness of the Multisensory Brain program to enhance phonological coding in the reading of words, and improve syntactic processing in the comprehension of written sentences, in prelingually deaf children between 6 and 10 years old.

First, we explored whether the multisensory phonological training (MPT) in combination with syntactic training (MPT+ST) improved deaf children’s phonological recoding in word reading, and their ability to segment words into syllables, in comparison with a non-multisensory phonological training in combination with ST (nonMPT+ST). During a 6-month period, one group of deaf children (experimental group) completed the MPT+ST with the Multisensory Brain program, while a second deaf group completed the nonMPT+ST with tasks that only required the manipulation of visual information. Both groups were evaluated before and immediately after the training with a visual lexical decision task that measures the pseudohomophone effect, a written word segmentation task, and a reading comprehension task with sentences of different grammatical structures.

If the MPT+ST were more effective than the nonMPT+ST, we expected that after training the deaf children of the experimental group (MPT+ST) would show a greater pseudohomophone effect and higher scores in the written word segmentation task than the deaf control group (nonMPT+ST). Additionally, since in both groups of deaf children the phonological training (multisensory for the experimental group and non-multisensory for the control group) was carried out together with a syntactic training (ST), we expected both groups to obtain better scores in the sentence tasks post-training. Furthermore, we predicted that the group that received ST combined with MPT would benefit most from the syntactic training, obtaining higher post-training scores in the sentence comprehension tasks than the group that received ST with nonMPT.

Additionally, we explored whether the pseudohomophone effect that deaf children obtained immediately after MPT+ST was comparable to the one shown by their typical hearing peers without reading problems, who had developed phonological representations of a fundamentally auditory nature and who did not need phonological skill training. If in deaf children the phonological training with multisensory tasks in combination with syntactic training (MPT+ST group) was able to develop or enhance multisensory phonological representations similarly to the phonological representations of typical hearing children, we expected that after the MPT+ST deaf children would show a pseudohomophone effect similar to that of hearing peers.

Finally, in a follow-up study, we investigated whether the effects observed immediately after training in both groups of deaf children were retained six months after the training.

Method

Participants and procedure

Forty prelingually deaf children (23 boys and 17 girls) between 6 and 10 years of age (mean age 8.5 ± 1.9 years), and 28 hearing children (10 boys and 18 girls) between 6 and 10 years (mean age 7.8 ± 1.5 years), without neurological or psychiatric history, participated in this study. All participants were recruited from public schools and associations for deaf children in Spain. Parents or legal guardians of participants provided written informed consent prior to the beginning of the study.

All deaf children were evaluated individually in the facilities of the schools or associations from which they were recruited. During the evaluation session, each child was administered a nonverbal intelligence test, a computerized visual lexical decision task that measures the pseudohomophone effect, a task measuring the ability to segment words, and a reading comprehension task with sentences of different grammatical structures. The session lasted approximately 50 minutes, including time for instructions and breaks between tasks. For those children who used Spanish Sign Language (SSL) along with oral language as their preferred communication system, the instructions were given with the support of SSL interpreters.

After completing the pre-training evaluation phase, the deaf children were assigned to either the experimental (MPT+ST) or the control (nonMPT+ST) group, which were matched for age, gender, cochlear implant use, and pseudohomophone effect in the visual lexical decision task. A total of 19 children (10 males; mean age: 8.4; SD: 1.1) were assigned to the experimental group, and 21 (13 males; mean age: 8.6; SD: 1.3) to the control group. Table 1 shows the main sociodemographic and clinical characteristics of the deaf children in the experimental and control groups. No significant differences were observed between the two groups of deaf children in age, sex, communication system used in the school context, type of education center, or age at which they were enrolled in school. Nor were significant differences found on the relevant clinical variables for this population (degree of hearing loss, use of CI, age of implantation, duration of CI use, and non-verbal IQ). Therefore, both groups of deaf children were equivalent in the main sociodemographic and clinical variables.

Table 1. Sociodemographic and clinical characteristics of deaf children assigned to the experimental group (MPT+ST) and to the control group (nonMPT+ST).

The hearing children were administered the non-verbal intelligence test and the computerized visual lexical decision task that measures the pseudohomophone effect at the beginning of the school year (Testing 1) and after six months (Testing 2). We evaluated this hearing group in order to investigate whether the pseudohomophone effect that deaf children obtained after MPT+ST was comparable to that shown by their typical hearing peers. Although the mean IQ of hearing children was somewhat higher (113.7) than that of deaf children (100.3), the mean IQs of both groups were within the normal range, and both groups were equivalent on the sociodemographic variables of age, sex, and school year (see Table 2).

Table 2. Sociodemographic characteristics and IQ scores in the non-verbal intelligence test of deaf children who received MPT+ST training and hearing children.

The training program for deaf children was implemented over 6 months with a weekly 35-minute session. The 22 training sessions were completed in small groups of 4 to 9 children in the same facilities from which the children were recruited, with the support of 1 or 2 researchers during the sessions. In each group, the children worked individually with a laptop. Each session followed a protocol so that all children completed the training under similar conditions. To achieve this, each child was assigned a control/follow-up notebook listing the tasks used for each session and their order. The total number of tasks for each child was determined as a function of the child’s age; however, the duration of the training sessions was the same for the younger (6–7 years old) and older children (8–10 years old). During the first 9 sessions, only phonological training tasks were administered: the experimental group used the multisensory tasks of the Multisensory Brain program (MPT), and the control group used equivalent non-multisensory tasks with only static visual information. Beginning with the tenth session (and until session 22), in addition to the phonological tasks, syntactic processing tasks were used in the experimental group (MPT+ST) and the control group (nonMPT+ST).

Once the training phase was completed, and within one week from the end of the training sessions, all deaf children were evaluated again. All tasks used in the procedure of the post-training evaluation were the same as in the pre-training evaluation.

Materials

Assessment tasks

Non-verbal intelligence test TONI-2 (Brown, Sherbenou, & Johnsen, 2000)

This is a standardized psychometric test in which the measure of intelligence is obtained through the ability of children to solve problems with abstract figures. It is thus considered a test devoid of language influence, and can be applied to children 5 years of age and older. Responding requires no speech, reading, or writing; only a minimal motor response is needed. The test items are presented in a notebook with a total of 55 test items and 6 training examples. The instructions can be given through gestures or brief verbal cues (e.g., “Which of these pictures should go here?”). The children give their answer by pointing to their chosen solution in the notebook. In the figures that appear for each item, the children must take into account characteristics related to shape, position, contiguity, shading, size, length, movement, and details. In the most difficult items, more than one of these characteristics is involved. The difficulty increases progressively, and more rules, of more kinds, have to be applied to find the solution. In each item, the children have to observe the similarities, differences, and changes that exist between the figures of the problem, and then analyze the figures that are proposed as solutions. They have to identify the rule or rules operating in the problem figures and choose the correct response following those rules. The starting item changes as a function of age: for example, for the younger children (6 to 7 years old) the starting item is the first one, for children 8 to 9 years old it is item 4, and for 10-year-olds it is item 8. The test ends when a child commits five consecutive errors. For each child, a direct score (DS) is calculated, which corresponds to the number of correct responses before reaching ceiling. To compare the results of each child with their normative group, the DS can be transformed into an intelligence quotient (IQ; mean = 100, SD = 15, normative range = 85–115). The reliability indices of this test range between 0.83 and 0.92 (Brown et al., 2000).
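To make the scoring concrete, the DS-to-IQ conversion follows the usual deviation-IQ convention, which can be sketched as below. This is a minimal sketch in Python: the normative mean and SD for each age group come from the TONI-2 manual’s lookup tables, so the values used here are hypothetical.

```python
def deviation_iq(ds: float, norm_mean: float, norm_sd: float) -> float:
    """Convert a direct score (DS) to a deviation IQ (mean 100, SD 15).

    norm_mean / norm_sd are the normative mean and standard deviation of DS
    for the child's age group (hypothetical here; the TONI-2 manual provides
    the actual conversion as normative lookup tables).
    """
    z = (ds - norm_mean) / norm_sd  # standardize the DS against the age norms
    return 100 + 15 * z

# Example with hypothetical norms: a DS of 33 against mean 30, SD 6 -> IQ 107.5
print(deviation_iq(33, norm_mean=30, norm_sd=6))
```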

Computerized visual lexical decision task

This task was designed with E-Prime v.2 software (Psychology Software Tools, Pittsburgh, PA). In each trial, a verbal stimulus is presented in the center of the screen and the child is asked to indicate whether that stimulus is a real word by pressing the corresponding key on the keyboard. In each trial, the sequence of events is as follows. First, a central fixation point (+) appears for 500 ms. Then the target stimulus is presented, which stays on the screen until the subject responds. The task consists of two blocks of trials: a practice block with 6 trials, which serves to ensure that the child understands the instructions, and an experimental block with 56 trials. Real words and pseudowords are presented in random order and in equal proportion (50/50). In the 28 trials with real words, the words vary in frequency of use: according to the frequency dictionary of Pérez et al. (2003), we used 7 low-frequency words, 7 high-frequency words, and 14 medium-frequency words. In the remaining 28 trials, the target stimulus could be a pseudoword (14) or a pseudohomophone (14). The pseudohomophones and pseudowords were constructed from the high- and low-frequency words by modifying two or three letters (e.g., the pseudoword “molejio” from “colegio,” which means “school”). This task allows us to obtain a measure of the pseudohomophone effect as the difference between the mean percentage of errors on pseudohomophones and the mean percentage of errors on pseudowords. A higher score reflects a larger pseudohomophone effect, from which we can infer a greater involvement of phonological recoding in word reading.
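As an illustration of this scoring rule, the sketch below computes the effect from trial-level responses. This is a minimal sketch: the data-frame column names are hypothetical and do not correspond to E-Prime’s actual output format.

```python
import pandas as pd

def pseudohomophone_effect(trials: pd.DataFrame) -> float:
    """Pseudohomophone effect in percentage points: mean error rate on
    pseudohomophone trials minus mean error rate on pseudoword trials.

    Expects one row per trial, with hypothetical columns 'stimulus_type'
    ('word' | 'pseudoword' | 'pseudohomophone') and 'correct' (0 or 1).
    """
    def error_pct(stim_type: str) -> float:
        subset = trials.loc[trials["stimulus_type"] == stim_type, "correct"]
        return 100 * (1 - subset.mean())

    # A positive difference suggests phonological recoding: pseudohomophones,
    # which sound like real words, are harder to reject than plain pseudowords.
    return error_pct("pseudohomophone") - error_pct("pseudoword")
```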

Word segmentation task

This task is a computerized version of the “Equal-different” subtest from the Reading Processes Evaluation Battery (PROLEC-R; Cuetos et al., 2007). The task allows the experimenter to determine whether children are able to segment and identify the letters that form each word they read, or whether they instead use logographic reading – that is, reading that is fundamentally mediated by the direct or lexical route. In each trial, a pair of words or pseudowords is presented at the center of the screen, and the child must indicate whether the two elements are the same or different by pressing the corresponding key on the keyboard. There were a total of 20 trials. In half of them the two elements were identical: the same word twice (e.g., “mercado-mercado,” meaning “market”) or the same pseudoword twice (e.g., “calzapo-calzapo”). In the other half, the two elements of the pair differed: two words differing by one letter (e.g., “anguila-angula”), or two pseudowords differing by one letter (e.g., “bequefo-biquefo”). The task records correct responses and reaction times, which are combined into a score using the following function: (number of correct responses / total time in seconds) × 100.
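The scoring function can be written directly; a minimal sketch of the formula above:

```python
def segmentation_score(n_correct: int, total_time_s: float) -> float:
    """PROLEC-R 'Equal-different' index: accuracy weighted by speed."""
    return (n_correct / total_time_s) * 100

# Example: 18 correct responses given in 80 seconds yield a score of 22.5,
# in the range of the group means reported in the Results section below.
print(segmentation_score(18, 80.0))
```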

Reading comprehension with grammatical structures

This task is a computerized version of the “Grammatical structures” subtest of the PROLEC-R battery, which measures the capacity for the syntactic processing needed to extract the meaning of sentences with different grammatical structures. The task consists of 16 trials, each containing a written sentence at the center of the screen and four pictures, one in each corner; only one picture corresponds to the written sentence, while the other three are distractors. The task requires the child to read the sentence and choose the picture matching it. All sentences are reversible, meaning that the subject and object of the sentence can be switched. Four types of sentences are used, each with four stimuli: active, passive, focused-object, or subordinate relative. The score is the total number of correct responses.

Training program

Multisensory Phonological Training (MPT)

To train phonological abilities, an experimental group used the tasks of the Multisensory Brain program: the 12 LectoLip tasks and the 18 Multisensory Phonological Encoding (MPE) tasks.

The LectoLip tasks require the children to associate the lip movements from the oral production of words with the images that represent them and with their written form (graphemes). With these tasks, we aim to train children’s grapheme–phoneme conversion skills and enhance the phonological representations needed for reading through the phonological or sub-lexical route. In all these tasks, access to the phonological structure of words is derived from multisensory information: (1) static visual information (from the orthography and visual images that represent words); (2) dynamic visual information (seeing lip movements during oral production of words); and (3) kinesthetic information (imitating lip movements). All the LectoLip tasks are matching tasks following a uniform structure: a target stimulus appears in the upper half of the screen (e.g., a video in which the lower part of a face is shown pronouncing a word; an image; a syllable or a written word), and three stimuli appear in the lower half of the screen as response alternatives.

The MPE tasks were also designed to stimulate the development of phonological representations through multisensory information, enabling the children to associate multisensory information from the phonological structure of words with the images that represent them and their written form (graphemes). In addition, the MPE tasks involve the children operating directly on these phonological representations, since these tasks also require them to: (1) segment words into syllables, (2) identify the correct order of syllables in a word, (3) build words from isolated syllables and letters, and (4) keep words in short-term memory. In these tasks, in addition to the visual and kinesthetic information, vibrotactile information from the oral production of words was also used. For the conversion of sound to vibrotactile stimulation we used the Basslet, a small wristwatch-sized bracelet worn on the wrist, developed by the German company Lofelt (https://lofelt.com/about).

Non multisensory phonological training (nonMPT) for control group

In the control group, phonological skill training was implemented through 16 tasks. These tasks aimed to stimulate the development of phonological representations, requiring the children to (1) segment words into syllables, (2) identify the correct order of syllables in a word, (3) build words from isolated syllables and letters, and (4) keep words in short-term memory. These four requirements applied to both the multisensory and non-multisensory tasks. The non-multisensory tasks, however, presented only visual information, without the vibrotactile or kinesthetic information included in the multisensory tasks used with the experimental group. Despite not using phonemes, these tasks can be used to train phonetic understanding through orthographic representation: the main focus is training phonological awareness through the identification and manipulation of syllables, which are the largest phonetic units within a word.

Syntactic Training (ST)

For the syntactic processing training, in both the experimental and control groups, the second block of tasks from the Multisensory Brain program was used. The objective of these tasks was to train the basic operations of syntactic processing that allow the child to extract the meaning of written sentences: (1) categorization of the components that form a sentence (noun phrase, verb, subordinate clause); (2) identification of the relationships that exist between those components; and (3) building of the correct structure through the hierarchical reorganization of those components. Specifically, this block is made up of 13 syntactic awareness tasks of progressive difficulty, using only visual information. The first five tasks require the child to identify and select words or groups of words that work as subjects, objects, or verbs in the sentence, so that the message expressed matches the one presented in the accompanying images (pictures, photos, or animated GIFs). The next seven tasks require the child to: (1) order the components of a written sentence; (2) detect grammatical errors; (3) identify the word that completes a sentence coherently from a syntactic and semantic point of view; and (4) match written sentences of different syntactic complexity with the corresponding image. The last two tasks require matching a video of a message presented through lip reading or Spanish Sign Language with the corresponding written sentence.

Results

First, we used Mann-Whitney U tests to explore whether, in the pre-training evaluation, both groups of deaf children (MPT+ST vs. nonMPT+ST) were equivalent in the pseudohomophone effect obtained with the visual lexical decision task and in the scores obtained on the word segmentation and reading comprehension tasks. As shown in Table 3, both groups showed equivalent performance on these tasks in the pre-training evaluation.

Table 3. Results obtained in the pre-training evaluation by deaf children assigned to the experimental group (MPT+ST) and deaf children assigned to the control group (nonMPT+ST).

The pre-post-training improvements observed in both groups on each of the three tasks used in the present study are described below. For these analyses, we used the Wilcoxon signed-rank test.
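For readers who want to reproduce this kind of analysis, the sketch below implements the Wilcoxon signed-rank test via its normal approximation and derives the effect size as r = |Z|/√N, the convention that appears consistent with the values reported below (e.g., |Z| = 3.823 with N = 19 children gives r ≈ .88). This is a minimal sketch, not the authors’ actual analysis script:

```python
import numpy as np
from scipy import stats

def wilcoxon_with_effect_size(pre, post):
    """Wilcoxon signed-rank test (normal approximation, no tie correction)
    plus the effect size r = |Z| / sqrt(N), with N the number of pairs."""
    d = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    nz = d[d != 0]                      # zero differences are conventionally dropped
    n = nz.size
    ranks = stats.rankdata(np.abs(nz))  # rank absolute differences (ties get mean rank)
    w_plus = ranks[nz > 0].sum()        # sum of ranks of positive differences
    mu = n * (n + 1) / 4                # mean of W+ under the null hypothesis
    sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)  # SD of W+ under the null
    z = (w_plus - mu) / sigma
    p = 2 * stats.norm.sf(abs(z))       # two-sided p-value
    r = abs(z) / np.sqrt(d.size)        # effect size over all N pairs
    return z, p, r
```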

Pre-post-training improvements in the visual lexical decision task (pseudohomophone effect)

In the experimental group (MPT+ST), we found that the post-training pseudohomophone effect (mean = 15.8, SD = 19.8) was greater than that observed in the pre-training evaluation (mean = 6.1, SD = 18.6), and that this pre-post improvement was statistically significant (Z = −3.823; p < .001) and associated with a large effect size (r = .88; see Table 4).

Table 4. Pre-Post-training improvements in the group of deaf children who received MPT+ST (experimental group).

However, the control group (nonMPT+ST) showed no pre-post improvement in this task (see Table 5), since no significant difference in the pseudohomophone effect before (mean = 9.2, SD = 16.0) and after (mean = 3.7, SD = 19.9) training was found (Z = −1.328; p = .184).

Table 5. Pre-Post-training improvements in the group of deaf children who received nonMPT+ST (control group).

Additionally, in order to compare the pseudohomophone effects in the group of deaf children who received MPT+ST with those obtained by their typical hearing peers without reading problems, we performed additional analyses using Mann-Whitney U tests.

First, we compared the pseudohomophone effect obtained by the MPT+ST group in the pre-training evaluation with that obtained by the hearing group at Testing 1. As expected, we found that hearing children had a much higher pseudohomophone effect (mean = 38.9, SD = 37.3) than deaf children (mean = 6.1, SD = 18.6), and this difference was statistically significant (U = 127.5; p = .003). Furthermore, we observed that before MPT+ST training only 47.4% of deaf children had a positive pseudohomophone effect, while 82.1% of hearing children had a positive pseudohomophone effect at Testing 1, a difference that was significant (χ2 = 6.299, p = .012).
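This proportion comparison can be reconstructed from the counts implied by the percentages above (9 of 19 deaf children and 23 of 28 hearing children with a positive effect). Below is a minimal sketch with SciPy’s chi-square test of independence; omitting Yates’ continuity correction recovers the reported value:

```python
from scipy.stats import chi2_contingency

# Rows: deaf group (pre-training) vs. hearing group (Testing 1);
# columns: positive pseudohomophone effect vs. not.
table = [[9, 10],   # 9/19 deaf children, about 47.4% positive
         [23, 5]]   # 23/28 hearing children, about 82.1% positive

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 3), round(p, 3))  # ~6.299 and ~0.012, the values reported above
```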

However, when we compared the pseudohomophone effect obtained by the MPT+ST group in the post-training evaluation (mean = 15.8, SD = 19.8) with that obtained by the hearing group at Testing 2 (mean = 30.1, SD = 32.4), we observed that this difference was no longer statistically significant (U = 220; p = .312). Furthermore, we observed that in the post-training evaluation the percentage of deaf children who obtained a positive pseudohomophone effect (73.7%) was not significantly different (χ2 = .184, p = .668) from the percentage of hearing children who showed a positive pseudohomophone effect at Testing 2 (68%).

Pre-post-training improvements in the word segmentation task

As shown in Table 4, the MPT+ST group significantly improved from pre- to post-training on the written word segmentation task: the post-training score (mean = 23.74, SD = 9.15) was significantly greater than the pre-training score (mean = 21.84, SD = 12.59; Z = −1.965; p = .049), with a medium effect size (r = .45), which suggests a larger role of the phonological route in reading.

However, as shown in Table 5, there were no significant differences in the nonMPT+ST group between pre-training (mean = 24.62, SD = 17.37) and post-training scores (mean = 21.95, SD = 13.94; Z = −0.593; p = .553).

The scales of the word segmentation task provide cutoff points to diagnose reading difficulties. Thus, the direct score obtained for each child from the number of correct responses and the response times (general index) can be classified into the following normative categories: N (score within the normal range); D (mild difficulty); or DD (severe difficulty). In the pre-training evaluation, a high percentage of deaf children in both groups obtained D or DD on this test: 42% in the experimental group and 52% in the control group, a difference that was not statistically significant (χ2 = 0.422; p = .739). In the post-training evaluation, a much lower percentage (10%) of children in the experimental (MPT+ST) group presented D or DD than in the pre-training evaluation (42%); this difference was not significant, but close to alpha = .05 (χ2 = 4.886; p = .065). In contrast, the percentage of children in the control (nonMPT+ST) group presenting D or DD in written word segmentation post-training was only slightly lower (42%) than pre-training (52%), a difference that was not statistically significant (χ2 = .38; p = .76).

Pre-post-training improvements in the reading comprehension task

As shown in Tables 4 and 5, both groups of deaf children showed pre-post improvements in this task.

Comparing the pre- and post-training scores (with Wilcoxon signed-rank tests), we found that the post-training score obtained by the MPT+ST group (mean = 8.63, SD = 2.94) was significantly greater than the pre-training score (mean = 6.26, SD = 2.66; Z = −2.709; p = .007). Likewise, in the nonMPT+ST group, the post-training score (mean = 7.65, SD = 1.87) was significantly greater than the pre-training score (mean = 6.50, SD = 1.23; Z = −2.314; p = .021). Medium effect sizes were obtained for both groups, with a slightly higher value for the MPT+ST group (r = .62) than for the nonMPT+ST group (r = .50). This result suggests that ST combined with multisensory phonological training (MPT+ST) could be more effective at improving sentence reading comprehension in deaf children than ST combined with non-multisensory phonological training (nonMPT+ST).

Follow-up study

With this follow-up study we wanted to investigate the long-term effects of the MPT+ST and nonMPT+ST trainings. Specifically, we examined whether the pre-post improvement in the pseudohomophone effect observed in the MPT+ST group, and the pre-post improvements on the reading comprehension task observed in both groups, were still present six months after the training. In addition, we wanted to determine whether the reading comprehension level at the start of the new school year of the children who received MPT+ST was higher than that of the children who received nonMPT+ST. To this end, we conducted a follow-up evaluation using a standardized reading comprehension test that allowed us to compare each group of deaf children against the norms.

Method

Participants

This follow-up study included 16 of the 19 deaf children from the MPT+ST group and 20 of the 21 deaf children from the nonMPT+ST group (3 children from the MPT+ST group and one from the nonMPT+ST group were lost to attrition).

Materials

All children were administered the visual lexical decision task and the reading comprehension task used in the pre- and post-training evaluations. Additionally, each child was administered one of the following standardized reading comprehension tests: the “Reading Test Level 2” (for the second year of primary school; De la Cruz, 2002), or the “Reading Comprehension Evaluation Level 2, ECL-2” test (for children between the third and sixth years of primary school; De la Cruz, 2005).

Reading test level 2 (De la Cruz, 2002)

This standardized test provides a measure of reading comprehension in primary education. The test includes two levels, level 2 being the appropriate one for children in the second year of primary school. This level consists of three parts, but in the current study only the second part was used, since it relies only on visual information and was therefore suitable for deaf children. The children are required to do two reading tasks. The first consists of completing a written sentence by choosing the correct response from among four alternative words (e.g., “A hat is worn on … ” /the jacket/the head/the dog/the air/). In the second task, the child must read a sentence or short text and then respond to a series of questions (with four alternative responses). The first task includes 25 items and has a time limit of 15 minutes. The second task consists of seven sentences and short texts, and the child must answer a total of 25 questions in 15 minutes. A maximum direct score of 50 can be obtained, which can be transformed into centile scores for comparison with a standard sample. The test also estimates the minimum level required to effectively follow the second-year primary curriculum: at that minimum reading level, children should be able to answer at least 40% of items correctly at the start of the school year, which means obtaining a minimum direct score of 20 to demonstrate reading proficiency. The reliability index of this test ranges between 0.78 and 0.93 (De la Cruz, 2002).

Reading Comprehension Evaluation Level 2 (ECL-2; De la Cruz, 2005)

The ECL-2 is a standardized test for children between the third and sixth years of primary school, measuring the ability to understand the meaning of ordinary written texts and to analyze simple features of texts. The test evaluates knowledge of synonyms, antonyms, and word and sentence meaning in literal and figurative senses. It includes five texts with a total of 27 questions, which must be answered in 30 minutes. A maximum direct score of 27 can be obtained, which can be transformed into centile scores for comparison against a standard sample. The Cronbach’s alpha reliability of the ECL-2 was 0.79 (De la Cruz, 2005).

Procedure

Once six months had elapsed since the end of the MPT+ST and nonMPT+ST training, all deaf children were evaluated again. The evaluation sessions in this follow-up study were conducted in the same way as the pre- and post-training evaluations.

Results

Pseudohomophone effect in the MPT+ST group 6 months after training

As described before, the group of deaf children who received the MPT+ST training showed a significant pre-post improvement in the pseudohomophone effect obtained with the visual lexical decision task: the post-training pseudohomophone effect (mean = 15.8, SD = 19.8) was significantly greater than the pre-training effect (mean = 6.1, SD = 18.6; see Table 4). However, six months later this effect was not retained, since the pseudohomophone effect found in the follow-up evaluation (mean = 5.4, SD = 17.1) was significantly smaller than the pseudohomophone effect obtained immediately after training (mean = 15.8, SD = 19.8; Z = −3.516; p < .001).

Scores in the reading comprehension task in both groups (MPT+ST and nonMPT+ST) 6 months after the training

As described before, both groups of deaf children showed pre-post improvements in the reading comprehension task (see Tables 4 and 5), with a slightly higher effect size for the MPT+ST group (r = .62) than for the nonMPT+ST group (r = .50). However, this improvement was retained only in the MPT+ST group. The nonMPT+ST group scored significantly lower in the follow-up evaluation (mean = 6.65, SD = 1.95) than it did immediately after training (mean = 7.65, SD = 1.87; Z = −2.066; p = .039). In contrast, the MPT+ST group maintained a score (mean = 8.56, SD = 2.70) similar to that obtained immediately after training (mean = 8.63, SD = 2.94; Z = −.914; p = .361).

Furthermore, comparing the follow-up scores obtained by both groups, we found that the score of the MPT+ST group (mean = 8.56, SD = 2.70) was significantly greater than that of the nonMPT+ST group (mean = 6.65, SD = 1.95; U = 94.5; p = .036).

Reading comprehension evaluated through standardized tests

On the standardized tests of reading comprehension, the deaf MPT+ST group obtained a better mean percentile score (21.1) than the nonMPT+ST group (13.8). This result indicates that deaf children who received multisensory training at the start of primary education (MPT+ST group) began their second year with the minimum level of reading comprehension required to follow the curriculum of that year effectively. As noted previously, a minimum direct score of 20 is required to demonstrate reading proficiency, according to the authors of the standardized reading test (De la Cruz, 2002). It is noteworthy that, while the deaf MPT+ST children obtained a mean score of 24, the score of the deaf nonMPT+ST children (15) did not reach the minimum reading level required for hearing children’s school performance to be satisfactory throughout the school year.

General discussion

The results of the present study show that in deaf children between 6 and 10 years of age, a novel multisensory training program integrating visual, kinesthetic, and vibrotactile information was effective at improving phonological recoding abilities in reading isolated words, as well as the syntactic processing abilities that are necessary for reading comprehension.

The results on the pseudohomophone effect in the visual lexical decision task showed a significant improvement from pre- to post-training in the MPT+ST group, and no significant difference in the nonMPT+ST group (Tables 4 and 5). The pseudohomophone effect is considered one of the strongest indicators of phonological processing in visual word recognition (Briesemeister et al., 2009). Furthermore, phonological recoding in word reading may be critical to self-teaching and to the development of the orthographic lexicon (Kyte & Johnson, 2006).

Our results also showed that the MPT+ST group significantly improved from pre- to post-training in the word segmentation task, which was not the case for the nonMPT+ST group (Tables 4 and 5). These results are consistent with previous studies showing that in deaf children reading can be mediated by the phonological route (e.g., Blythe et al., 2018; Daza et al., 2014; Transler & Reitsma, 2005).

This type of evidence has been interpreted in favor of the hypothesis that in children who are deprived of, or have very limited access to, auditory information from an early age, phonological representations of words do not develop exclusively from auditory stimulation, but are also based on visual information from lip reading and orthography, on kinesthetic information from articulation or the imitation of lip movements (articulatory feedback), and on dactylology in sign language. However, to our knowledge, there have been no previous reports of improving phonological representations through a multisensory training integrating visual, kinesthetic, and vibrotactile information. The improvement in the pseudohomophone effect seen in the experimental group, in contrast with the control group, is consistent with the idea that the multisensory integration approach is an effective way of activating the phonological route for deaf children.

Furthermore, our results showed that before the training, the scores of deaf MPT+ST children were significantly lower than those of their age-matched hearing controls, while after training, the pseudohomophone effect shown by the deaf MPT+ST children was similar to that of hearing children at Testing 2. This suggests that, in deaf children, the multisensory training enhanced the use of a “multisensory phonological route” through which their coding abilities improved, reaching a level similar to that of their hearing peers without reading problems.

Our results also suggest that the multisensory training of phonological recoding can bolster syntactic processing. Both the MPT+ST and nonMPT+ST groups began syntactic training after completing 9 sessions of phonological training. Both groups began with low scores in written sentence comprehension (Table 3), and while both showed significant improvement through their respective trainings, the MPT+ST group’s effect size was somewhat greater (Tables 4 and 5). This suggests that when children improve their phonological recoding and can use the sublexical route in word reading, the improved lexical access leaves syntactic-level processes requiring fewer cognitive resources. In line with this, authors such as Cuetos (2008) have suggested that when learning to read, children acquire basic and higher-level processes simultaneously. Training even the most basic processes, such as lexical access, could contribute to the improvement of higher-level processes, such as syntax.

The results of the follow-up study shed some light on the long-term effects of the multisensory training on phonological recoding, showing that 6 months after the training ended, the pseudohomophone effect had diminished considerably. Although the proportion of deaf children who still showed a pseudohomophone effect was greater in the MPT+ST group (56%) than in the nonMPT+ST group (40%), this difference was not significant.

The reduction in the pseudohomophone effect over time indicates that the children’s reading was no longer being mediated by the phonological, or sublexical, route, suggesting that a sustained effect might require additional or longer-term training. In the short term, the multisensory training proved effective for the utilization of phonological representations and for word segmentation ability, resulting in greater phonological recoding abilities and in reading mediated by the phonological or sub-lexical route. As described earlier, the ability to utilize the phonological route is especially important when beginning to learn to read, because it helps to develop orthographic representations, which in turn strengthens the direct route and improves reading ability. This idea is consistent with findings that Spanish deaf adult “good readers” do not utilize the phonological route for reading (Fariña et al., 2017).

In transparent languages like Spanish, in order to develop orthographic representations of words, it is helpful to first read words through the phonological route, which is precisely what the multisensory training aims to promote. When the phonological route is used as a first step in learning to read, then with increased reading experience, a greater level of reading comprehension can ultimately be attained. This might explain why the children in the MPT+ST group, despite their reduced pseudohomophone effect in the follow-up evaluation (likely because their reading was no longer mediated by the phonological route), showed significantly greater scores than the nonMPT+ST group on the sentence comprehension task, as well as a better percentile score on the standardized test of reading comprehension. This difference is especially notable in the deaf children of the MPT+ST group who were beginning their second year of primary education, and who ultimately showed scores on the standardized tests of reading comprehension that were appropriate for their school year.

Thus, our results support the importance of promoting the phonological route when deaf children are beginning to learn to read. With this approach, children can develop the direct or orthographic route, which will provide them greater reading speed and will enable them to achieve higher reading comprehension levels by the end of the school term. As children gain reading experience, both routes become complementary, and can be used to different degrees depending on the demands of the reading task.

The results observed with the hearing children are also consistent with this idea. The pseudohomophone effect observed in hearing children at Testing 2 (30.1) was significantly lower than that observed at Testing 1 (38.9). During the 6 months that elapsed between testing sessions, these hearing children did not receive any phonological training, but their experiences with reading could have contributed to the development of the direct route, which might explain the reduction observed in the pseudohomophone effect.

Taken together, these studies provide the first comprehensive set of results to show that multisensory (visual, kinesthetic, and vibrotactile) training is beneficial during initial phases of learning to read, to help children begin to use the phonological route and develop orthographic representations, and that in the long term, this training could contribute to deaf children’s ability to obtain higher reading comprehension by the time they complete their primary education. We have made access to our training program freely available through a website (http://www2.ual.es/multisensory-brain/es).

Future studies might explore whether a longer duration, or periodic extensions, of the multisensory training, could lead to long-term effects of the training, perhaps especially in children beginning primary education. In addition, future studies might examine the possibility that training phonological recoding through the integration of visual, kinesthetic, and vibrotactile information could be helpful in other cases of reading impairments where auditory information may not be sufficiently processed, such as in dyslexia or even in patients with brain injury resulting in impaired reading or aphasia.

Ethics approval

This study was conducted in accordance with the principles of the Declaration of Helsinki and was approved by the Research Ethics Committee at University of Almería (Spain). Written informed consent to participate in this study was provided by the participants’ legal guardian/next of kin.

Acknowledgments

The authors thank the Rosa Relaño (Almería, Spain), APANDA (Cartagena, Spain), and ASPANPAL (Murcia, Spain) associations for their kind collaboration in this research.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research was supported by grant PSI2016-79437-R from the Spanish Ministry of Economy and Competitiveness – European Regional Development Fund (ERDF), awarded to the first author.

References

  • Alegría, J., Leybaert, J., Charlier, B., & Hage, C. (1992). On the origin of phonological representations in the deaf: Hearing lips and hands. In J. Alegría, D. Holender, J. Morais, & M. Radeau (Eds.), Analytic approaches to human cognition (pp. 119–144). Elsevier.
  • Alegría, J., Carrillo, M. S., Rueda, M. I., & Domínguez-Gutiérrez, A. B. (2020). Lectura de oraciones en español: Similitudes y diferencias interesantes entre niños con dislexia y niños con sordera. Anales de Psicología/Annals of Psychology, 36(2), 295–303. https://doi.org/10.6018/analesps.396841
  • Bell, N., Angwin, A. J., Wilson, W. J., & Arnott, W. L. (2019). Reading development in children with cochlear implants who communicate via spoken language: A psycholinguistic investigation. Journal of Speech, Language, and Hearing Research, 62(2), 456–469. https://doi.org/10.1044/2018_JSLHR-H-17-0469
  • Bellugi, U., Klima, E., & Siple, P. (1975). Remembering in signs. Cognition, 3(2), 93–125. https://doi.org/10.1016/0010-0277(74)90015-8
  • Blythe, H. I., Dickins, J. H., Kennedy, C. R., & Liversedge, S. P. (2018). Phonological processing during silent reading in teenagers who are deaf/hard of hearing: An eye movement investigation. Developmental Science, 21(5), e12643. https://doi.org/10.1111/desc.12643
  • Briesemeister, B. B., Hofmann, M. J., Tamm, S., Kuchinke, L., Braun, M., & Jacobs, A. M. (2009). The pseudohomophone effect: Evidence for an orthography–phonology conflict. Neuroscience Letters, 455(2), 124–128. https://doi.org/10.1016/j.neulet.2009.03.010
  • Brown, L., Sherbenou, R. J., & Johnsen, S. K. (2000). TONI-2. Test de Inteligencia No Verbal. Apreciación de la habilidad cognitiva sin influencia del lenguaje. TEA.
  • Coltheart, V., Laxon, V., Rickard, M., & Elton, C. (1988). Phonological recoding in reading for meaning by adults and children. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14(3), 387–397. https://doi.org/10.1037/0278-7393.14.3.387
  • Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108(1), 204–256. https://doi.org/10.1037/0033-295x.108.1.204
  • Coltheart, M. (2005). Modelling reading: The dual-route approach. In M. J. Snowling & C. Hulme (Eds.), The Science of Reading: A handbook (pp. 6–23). Blackwell.
  • Conrad, R. (1979). The Deaf Schoolchild. Harper and Row.
  • Cuetos, F., Rodríguez, B., Ruano, E., & Arribas, D. (2007). PROLEC-R. Batería de Evaluación de los Procesos Lectores – Revisada. TEA.
  • Cuetos, F. (2008). Psicología de la lectura. Wolters Kluwer Educación.
  • Daza, M. T., Ruiz-Cuadra, M. M., & Rodríguez-Cerezo, M. A. (2014). Recodificación fonológica para el reconocimiento de palabras escritas en niños sordos. In Asociación Española de Logopedia, Foniatría y Audiología (Ed.), Logopedia: Evolución, Transformación y Futuro (pp. 219–229). AELFA.
  • De la Cruz, M. V. (2002). Pruebas de Lectura. Niveles 1 y 2. TEA.
  • De la Cruz, M. V. (2005). Evaluación de la Comprensión Lectora (ECL-2). TEA.
  • Fariña, N., Duñabeitia, J. A., & Carreiras, M. (2017). Phonological and orthographic coding in deaf skilled readers. Cognition, 168, 27–33. https://doi.org/10.1016/j.cognition.2017.06.015
  • Fletcher, M. D., Mills, S. R., & Goehring, T. (2018). Vibro-tactile enhancement of speech intelligibility in multi-talker noise for simulated cochlear implant listening. Trends in Hearing, 22, 2331216518797838. https://doi.org/10.1177/2331216518797838
  • Gallaudet Research Institute. (2004). Norms booklet for deaf and hard-of-hearing students: Stanford Achievement Test, 10th edition, Form A. Gallaudet University.
  • Geers, A., Tobey, E., Moog, J., & Brenner, C. (2008). Long-term outcomes of cochlear implantation in the preschool years: From elementary grades to high school. International Journal of Audiology, 47(Suppl. 2), 21–30. https://doi.org/10.1080/14992020802339167
  • Hadar, U., Steiner, T. J., & Rose, F. C. (1985). Head movement during listening turns in conversation. Journal of Nonverbal Behavior, 9(4), 214–228. https://doi.org/10.1007/BF00986881
  • Haptonstall-Nykaza, T. S., & Schick, B. (2007). The transition from fingerspelling to English print: Facilitating English decoding. Journal of Deaf Studies and Deaf Education, 12(2), 172–183. https://doi.org/10.1093/deafed/enm003
  • Harris, M., & Moreno, C. (2006). Speech reading and learning to read: A comparison of 8-year-old profoundly deaf children with good and poor reading ability. Journal of Deaf Studies and Deaf Education, 11(2), 189–201. https://doi.org/10.1093/deafed/enj021
  • Hoover, W. A., & Gough, P. B. (1990). The simple view of reading. Reading and Writing: An Interdisciplinary Journal, 2(2), 127–160. https://doi.org/10.1007/BF00401799
  • Kello, C. T., & Bernstein, L. E. (2000). Phonetic structure is similar across auditory and vibrotactile speech perception. The Journal of the Acoustical Society of America, 107(5), 2888. https://doi.org/10.1121/1.428735
  • Kyle, J. G. (1980). Reading development of deaf children. Journal of Research in Reading, 3(2), 86–97. https://doi.org/10.1111/j.1467-9817.1980.tb00204.x
  • Kyte, C. S., & Johnson, C. J. (2006). The role of phonological recoding in orthographic learning. Journal of Experimental Child Psychology, 93(2), 166–185. https://doi.org/10.1016/j.jecp.2005.09.003
  • Leybaert, J. (1993). Reading in the deaf: The roles of phonological codes. In M. Marschark & D. Clark (Eds.), Psychological perspectives on deafness (pp. 269–309). Lawrence Erlbaum Associates.
  • Livingstone, S. R., & Palmer, C. (2016). Head movements encode emotions during speech and song. Emotion, 16(3), 365–380. https://doi.org/10.1037/emo0000106
  • McClave, E. Z. (2000). Linguistic functions of head movements in the context of speech. Journal of Pragmatics, 32(7), 855–878. https://doi.org/10.1016/S0378-2166(99)00079-X
  • Moreno-Pérez, F. J., Saldaña, D., & Rodríguez-Ortiz, I. R. (2015). Reading efficiency of deaf and hearing people in Spanish. Journal of Deaf Studies and Deaf Education, 20(4), 374–384. https://doi.org/10.1093/deafed/env030
  • Pérez, M. A., Alameda, J. R., & Cuetos, F. (2003). Frecuencia, longitud y vecindad ortográfica de las palabras de 3 a 16 letras del Diccionario de la Lengua Española (RAE, 1992). Revista Electrónica de Metodología Aplicada, 8(2), 1–10. https://doi.org/10.17811/rema.8.2.2003.1-10
  • Perfetti, C. A., & Sandak, R. (2000). Reading optimally builds on spoken language: Implications for deaf readers. Journal of Deaf Studies and Deaf Education, 5(1), 32–50. https://doi.org/10.1093/deafed/5.1.32
  • Qi, S., & Mitchell, R. E. (2012). Large-scale academic achievement testing of deaf and hard-of-hearing students: Past, present, and future. Journal of Deaf Studies and Deaf Education, 17(1), 1–18. https://doi.org/10.1093/deafed/enr028
  • Rauschecker, J. P., & Tian, B. (2000). Mechanisms and streams for processing of “what” and “where” in auditory cortex. Proceedings of the National Academy of Sciences of the United States of America, 97(22), 11800–11806. https://doi.org/10.1073/pnas.97.22.11800
  • Rauschecker, J. P. (2018). Where, When, and How: Are they all sensorimotor? Towards a unified view of the dorsal pathway in vision and audition. Cortex, 98, 262–268. https://doi.org/10.1016/j.cortex.2017.10.020
  • Rezaei, M., Rashedi, V., & Morasae, E. K. (2016). Reading skills in Persian deaf children with cochlear implants and hearing aids. International Journal of Pediatric Otorhinolaryngology, 89, 1–5. https://doi.org/10.1016/j.ijporl.2016.07.010
  • Scarabello, E. M., Lamônica, D. A. C., Morettin-Zupelari, M., Tanamati, L. F., Campos, P. D., Alvarenga, K. F., & Moret, A. L. M. (2020). Language evaluation in children with pre-lingual hearing loss and cochlear implant. Brazilian Journal of Otorhinolaryngology, 86(1), 91–98. https://doi.org/10.1016/j.bjorl.2018.10.006
  • Shankweiler, D., Liberman, I. Y., Mark, L. S., Fowler, C. A., & Fischer, F. W. (1979). The speech code and learning to read. Journal of Experimental Psychology: Human Learning and Memory, 5(6), 531–545.
  • Share, D. L. (1995). Phonological recoding and self-teaching: Sine qua non of reading acquisition. Cognition, 55(2), 151–218. https://doi.org/10.1016/0010-0277(94)00645-2
  • Share, D. L. (1999). Phonological recoding and orthographic learning: A direct test of the self-teaching hypothesis. Journal of Experimental Child Psychology, 72(2), 95–129. https://doi.org/10.1006/jecp.1998.2481
  • Szagun, G., & Stumper, B. (2012). Age or experience? The influence of age at implantation and social and linguistic environment on language development in children with cochlear implants. Journal of Speech, Language, and Hearing Research, 55(6), 1640–1654. https://doi.org/10.1044/1092-4388(2012/11-0119)
  • Tiede, M., Mooshammer, C., & Goldstein, L. (2019). Noggin Nodding: Head movement correlates with increased effort in accelerating speech production tasks. Frontiers in Psychology, 10, 2459. https://doi.org/10.3389/fpsyg.2019.02459
  • Torres, S., & Santana, R. (2005). Reading levels of Spanish deaf students. American Annals of the Deaf, 150(4), 379–387. https://doi.org/10.1353/aad.2005.0043
  • Transler, C., & Reitsma, P. (2005). Phonological coding in reading of deaf children: Pseudohomophone effects in lexical decision. British Journal of Developmental Psychology, 23(4), 525–542. https://doi.org/10.1348/026151005X26796
  • Tunmer, W. E., & Hoover, W. (1992). Cognitive and linguistic factors in learning to read. In P. B. Gough, L. C. Ehri, & R. Treiman (Eds.), Reading acquisition (pp. 175–214). Erlbaum.
  • Tyrone, M. E., & Mauk, C. E. (2016). The phonetics of head and body movement in the realization of American Sign Language signs. Phonetica, 73(2), 120–140. https://doi.org/10.1159/000443836
  • Wang, Y., Trezek, B. J., Luckner, J. L., & Paul, P. V. (2008). The role of phonology and phonologically related skills in reading instruction for students who are deaf or hard of hearing. American Annals of the Deaf, 153(4), 396–407. https://doi.org/10.1353/aad.0.0061
  • Wauters, L. N., van Bon, W. H. J., & Tellings, A. E. J. M. (2006). Reading comprehension of Dutch deaf children. Reading and Writing, 19(1), 49–76. https://doi.org/10.1007/s11145-004-5894-0