Research Article

A common neural mechanism for speech perception and movement initiation specialized for place of articulation

Article: 1233649 | Received 12 Apr 2016, Accepted 04 Sep 2016, Published online: 26 Sep 2016

Abstract

It has been proposed that there is a single shared mechanism in the left hemisphere of the brain, called the general motor programmer, that is necessary not only for the perception of speech but also for the initiation of hand movement. The present study attempts to establish what the general motor programmer is specifically specialized for, and to determine whether timing or place of articulation is the defining feature of this proposed mechanism. The shared aspect between speech perception and movement initiation, and thus the basis of the general motor programmer, appears to be place of articulation. The findings suggest a motor programming basis for the perception of speech, which has implications for brain damage as well as for general principles of brain functioning.

Public Interest Statement

Large swathes of the brain remain uncharted, especially when it comes to understanding how the brain is organized and how this organization underpins the brain’s functioning. In the present study, the organization and functioning of a proposed brain mechanism—the general motor programmer—was revisited with a view to homing in on the specific aspect of brain functioning this mechanism is specialized for. The present findings suggest that its specialization relates specifically to place of articulation. The implications for brain functioning generally, as well as the implications of damage to this mechanism for the extent of cognitive deficit, are outlined.

Competing Interest

The author declares no competing interest.

1. Introduction

Keane (1999) first proposed that there is a single shared brain mechanism, named the general motor programmer, which is used not only to initiate hand movement but also to perceive speech. The notion of shared mechanisms is not new: the idea that the same mechanism is used both to produce and to perceive speech is the basis of the motor theory of speech perception (Liberman, Cooper, Shankweiler, & Studdert-Kennedy, 1967). What was relatively new about the proposed general motor programmer was its more general nature. It is not a mechanism exclusive to speech, shared only between speech production and speech perception; nor is it shared only between a hand movement task and a similar hand perception task, or between two production tasks such as speech production and the production of a hand movement (although it likely covers all these scenarios). Rather, the mechanism used to initiate the movement that ultimately produces a hand movement is also used to perceive speech.

1.1. The general motor programmer: A speech perception–motor link

The proposed existence of a general motor programmer was based on the findings of the Keane (1999) study, where a movement of the hand, either hand, caused direct interference with the perception of speech. Generally speaking, language functions are thought to be specialized in the left hemisphere of the brain (Hugdahl, Bodner, Weiss, & Benke, 2003; Westerhausen, Kompus, & Hugdahl, 2014). Studies using behavioral techniques such as dichotic listening consistently report a faster or more accurate response in right handers when verbal stimuli are presented to the right ear than when presented to the left ear, a pattern known as the right ear advantage (REA). This REA, in turn, is taken to indicate a left hemisphere specialization for verbal processing (Hiscock, Inch, & Ewing, 2005; Rastatter & Gallaher, 1982). Left handers usually display smaller or more variable ear advantages, suggesting a greater bilateral organization of the brain for verbal functioning (Bryden, 1986; Geffen & Traub, 1979). In the Keane study, which was a behavioral study designed to ascertain the relative capacity of each hemisphere for verbal processing, interference and the absence of this REA were unexpectedly found for a verbal perception task in right and left handers with a consistent hand preference.

While it is usual, and indeed expected, that an REA will be found for verbal stimuli using behavioral techniques, these perceptual advantages can only inform us about the verbal ability of the left hemisphere and reveal very little about the verbal processing capacity of the right. For this reason, the Keane study assessed the cognitive functioning capacity of both hemispheres using a response procedure set out by Moscovitch (1973), which requires a manual response from the left hand, as well as the right, following stimulus presentation to each ear. In effect this meant there were four conditions (right ear stimulation/right hand response; right ear stimulation/left hand response; left ear stimulation/right hand response; left ear stimulation/left hand response) as opposed to the more usual two conditions: right ear stimulation/preferred hand response and left ear stimulation/preferred hand response. Using this procedure, Keane (1999), like previous studies (Rastatter & Gallaher, 1982; Studdert-Kennedy & Shankweiler, 1970), did find the expected REA for verbal processing, but only in those with a strong hand preference, left and right, and not in those with a weak hand preference. Moreover, the REA was related to the person’s degree or strength of hand preference: among the strong handers, only the right and left handers with a strong but inconsistent hand preference displayed the REA. In contrast, the strong right and left handers with a consistent hand preference did not show the expected REA, displaying either no ear advantage or more of a left ear advantage (LEA) instead.

The expectation regarding the association between the REA and hand preference is that the stronger and more consistent the hand preference, the greater the REA should be (Khedr, Hamed, Said, & Basahi, 2002; Sheehan & Smith, 1986), especially for right handers. The lack of the expected REA in the more strongly handed consistent handers therefore suggested that the requirement of having to make a manual response was causing interference in the left hemisphere with processing of the verbal stimuli, thereby negating any advantage the left hemisphere would normally have for verbal processing.

Furthermore, the verbal processing task was found to interfere not with just one hand but with both hands equally. This effect on the response of both the right and the left hand suggests that the interference was not at the level of preferred hand control, nor at the level of final motor execution (either of these would have resulted in ear by hand interactions, but none were found), but was instead at a higher level of control, above or prior to any level concerned with right and left (i.e. sides/hands); that is, it was a level involved in the initiation, planning, and programming of movement, with control over both hands (Kimura & Archibald, 1974; Liepmann, 1980). It is therefore the level of programming and movement initiation that is shared with speech perception. Additionally, while brain-injury studies had led to the presumption that general motor programming is based in the left hemisphere (Kimura, 1977, 1982; Liepmann, 1980), the findings from the Keane study show that this is only the case for those strong handers with a consistent hand preference. Those right and left handers with a strong but inconsistent hand preference seem to be less reliant on the left hemisphere for general motor programming, having instead a greater bilateral brain organization for this function.

1.2. Timing or place of articulation?

What exactly this common neural mechanism (i.e. the general motor programmer) is specialized for is not known. However, past research on the possible left hemisphere specializations that both hand movement and speech could be based on has centered on either the positioning of the musculature or some aspect of timing (Bradshaw & Nettleton, 1981; Jamison, Watkins, Bishop, & Matthews, 2006; Kimura, 1977; Ojemann, 1984; Perkell, 2012). That the specialization may be for timing, especially for rapidly changing temporal cues, was shown in a stop consonant–vowel identification study involving varying rates of acoustic change, where a left hemisphere advantage was found for the stop consonants when they were presented at a rapid rate, but a significantly reduced advantage was found when the rate of presentation was slower (Schwartz & Tallal, 1980). In a similar vein but using non-speech stimuli, it has been shown that while both hemispheres are activated for processing temporal information (Boemio, Fromm, Braun, & Poeppel, 2005), greater activation of the right hemisphere is found when the stimuli comprise longer, and thus slower, duration segments (Scott & McGettigan, 2013). That the shared aspect is timing is also suggested by the finding of an association between the degree of receptive language deficits and non-verbal temporal analysis deficits in those with left hemisphere damage, where those who had greater impairment in temporal analysis also had more problems with receptive language abilities (Tallal, 1983; Tallal & Newcombe, 1978).

However, while lesion and neuroimaging studies suggest it is the left hemisphere that plays the greater role in processing temporal information (Schirmer, 2004), the suggestion that the left hemisphere specialization is for some aspect of timing is disputed (Watson & Kimura, 1989), with rapid positioning and re-positioning of the musculature instead being put forward as the specialization of the left hemisphere (Lomas & Kimura, 1976; McFarland, Ashton, & Jeffery, 1989). Support for a left hemisphere specialization for placing or positioning of the articulators (manual-brachial as well as oral musculature) comes from studies involving people with brain damage (Kimura, 1977), where those with damage to the left hemisphere experienced particular difficulty when a limb-positioning manual task necessitated several different movements of the hand, but had little difficulty when the same movement of the hand had to be repeated over and over. Similarly, a study on oral movements (Mateer & Kimura, 1977) revealed the same difficulty when multiple placings of the musculature were involved: while those with fluent aphasia displayed no deficit when repeatedly producing the same oral movement (ba, ba, ba), they were significantly impaired relative to non-aphasic groups when they had to produce multiple oral movements that differed (ba, da, ga).

The specific speech features of voicing and place of articulation, which are taken as timing and positioning of the musculature, respectively, have also been investigated in speech perception studies with the aim of determining which of these is more reliant on the left hemisphere and possibly forms the basis of the left hemisphere asymmetry for language. Voicing refers to the vibration of the vocal cords, with the perception of the temporal order of vocal cord vibration relative to consonant release viewed as the timing aspect of speech, while place of articulation refers to the place or point along the vocal tract that is closed, thus restricting the flow of air to produce a sound (Wallwork, 1985). In studies comparing right and left hemisphere-damaged patients on their relative reliance on voicing and place of articulation cues for perceiving speech, left hemisphere-damaged patients have been found to perform significantly worse when the analysis depends on place of articulation, relying instead more on the voicing cue, while right brain-damaged patients made about equal use of both place and voicing features. This suggests that while the left hemisphere may be specialized for place, both hemispheres may be capable of processing the voicing feature of speech (Oscar-Berman, Zurif, & Blumstein, 1975; Perecman & Kellar, 1981). Similarly, studies of the neurologically intact usually find a left hemisphere advantage for place distinctions (Morais & Darwin, 1974; Voyer & Techentin, 2009), but either a left hemisphere or a more bilateral ability for the voicing dimension (Laguitton, De Graaf, Chauvel, & Liégeois-Chauvel, 2000; Simos, Molfese, & Brenden, 1997).

1.3. The present study

While the Keane (1999) study shows that speech perception and hand movements share the same general motor programming mechanism, what has not yet been determined is why they need to share this mechanism, and what specifically they have in common that requires both speech perception and movement initiation to use it. An answer to this should clarify what the general motor programmer is actually specialized for. Voicing and place of articulation were not specifically investigated in the Keane study; however, the verbal task in that study that caused the interference with initiating a hand movement called for a decision on whether or not two consonant–vowel syllables differed with regard to place of articulation or voicing. Pairs of consonant–vowel syllables were presented to left and right handers, who had to decide whether the two syllables were the same or different and then make a manual response. It was only when the two consonant–vowel syllables presented were not the same that the difference in interference between those with a consistent and those with a strong but inconsistent hand preference was evident.

Therefore, the initiation of a manual response and the perception of speech would both seem to be based on some feature of cognitive functioning that is necessary in order to determine whether two consonant–vowel syllables are different. The consonant–vowel syllables were presented in the pairings /ba/–/pa/, /ba/–/ga/, and /ba/–/ka/, with the /ba/–/pa/ contrast differing for voicing (timing) but not place of articulation; the /ba/–/ga/ contrast differing for place of articulation but not voicing; and the /ba/–/ka/ contrast differing for both voicing and place of articulation. The original Keane (1999) data thus give us an opportunity to disentangle the specific speech features of place of articulation and timing, enabling us to determine which of these two features is the basis of this common neural mechanism (Keane, 2012).

Consequently, the aim of the current study was to investigate the data from the Keane study further in order to determine which feature of speech perception, voicing or place of articulation, is specifically interfered with when speech perception is performed in conjunction with the hand response, so as to ascertain whether it is timing or place of articulation that is the defining feature of the general motor programmer.

2. Method

2.1. Participants

In the Keane (1999) study, university students taking psychology completed a handedness questionnaire, following which all those who unambiguously met the criteria with regard to direction of hand preference, degree of hand preference, and familial handedness were included in the study. This, however, did not yield enough left handers, so additional left handers were recruited from among students who responded to a general advert in the university requesting left handers to take part in a psychology study. Those left handers who responded to the advert were informed that they would be taking part in a reaction-time study that involved responding manually to aurally presented stimuli. Again, all those who responded to the advert and who clearly met the criteria with regard to direction and degree of hand preference as well as familial handedness were included in the original study.

Given that lateralization of the general motor programmer was found to be related to consistency of hand preference only in the strong right and strong left handers, only those with a strong hand preference, right or left, were included in the current study (mean age 20.4 years, SD = 1.83 years). Strength of hand preference was measured by the Edinburgh Handedness Inventory (EHI; Oldfield, 1971): strong handers had EHI scores of +51 to +100 or −51 to −100, whereas the excluded weak handers had scores of 0 to +50 or 0 to −50. There were 20 strong right-handed participants, 10 of whom had a consistent hand preference and 10 an inconsistent strong right hand preference. There were also 20 strong left-handed participants, of whom 6 had a consistent hand preference and 14 an inconsistent hand preference. Once a participant had been fully informed about what their involvement would entail and had consented to taking part in the study, the participant arranged a suitable date and time to attend the experiment, with each participant being tested individually using a non-invasive procedure (listening to pairs of consonant–vowel syllables and then manually responding). English was the first and main language for all participants, and no history of neurological damage was reported for any participant.

To ensure any REA or LEA that may be subsequently found in the experiment was not due to better hearing in one ear over the other, each participant’s hearing was tested by determining descending and ascending thresholds for each ear for the pure tones: 500, 1,000, 2,000, and 4,000 Hz. Based on the resulting audiograms, no participant had a difference in threshold between their right and left ear of more than 5 dB.
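
As a rough illustration, the hearing check described above reduces to comparing per-frequency thresholds between the two ears. The following minimal sketch (with made-up threshold values and a hypothetical data layout, not taken from the original study) applies the 5 dB inclusion criterion in code.

```python
# Hypothetical audiogram: threshold in dB for (right ear, left ear) at each
# test frequency in Hz; the values here are illustrative only.
thresholds = {500: (10, 10), 1000: (5, 10), 2000: (10, 10), 4000: (15, 10)}

# Inclusion criterion from the text: no right-left difference greater than 5 dB.
max_diff = max(abs(right - left) for right, left in thresholds.values())
print("included" if max_diff <= 5 else "excluded", f"(max difference = {max_diff} dB)")
```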

2.2. Hand preference classification

The participants’ responses to the EHI (Oldfield, 1971) were used as the basis on which they were classified as consistent right and left handers or as inconsistent strong right and left handers. The EHI assesses the hand used for the following items: writing; drawing; throwing; use of scissors; toothbrush; knife (without the fork); spoon; broom (upper hand on handle); striking a match (the match); and opening a box (the lid). An EHI handedness score was calculated for each participant, with scores ranging from +100 (right-handed for all items) to −100 (left-handed for all items). The EHI measures the degree of hand preference in addition to direction (right/left), with positive scores indicating a right hand preference and negative scores a left hand preference, and with higher scores indicating a strong hand preference and lower scores a weaker preference for the right or left hand.

The consistent right handers were those participants who wrote, as well as carried out all the other activities on the EHI, with their right hand, having an EHI score of +100. The inconsistent strong right handers were those who, although they wrote with their right hand and had a strong degree of hand preference, with scores ranging between +51 and +99, nonetheless used their left hand to perform one or more of the other activities on the EHI (M = +82.6, SD = 7.15). Similarly, the consistent left handers were those who performed all the EHI activities with their left hand, having an EHI score of −100, while the inconsistent strong left handers were those who used their left hand for writing but used their right hand to carry out one or more of the other activities on the EHI, with scores ranging from −51 to −99 (M = −74.6, SD = 9.97). Furthermore, participants who for any reason had changed their hand preference earlier in life, or whose degree of hand preference had been influenced by any physical incapacity, were not included in the study.
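
To make the grouping rule concrete, the sketch below (an illustration, not the study’s scoring procedure; graded EHI responses are simplified to a single hand per item) computes an EHI-style laterality quotient and maps it onto the four groups just described.

```python
def ehi_quotient(item_hands):
    """Laterality quotient from +100 (all right) to -100 (all left).

    item_hands: one entry per EHI item, 'R' or 'L' (the full inventory allows
    graded responses; they are simplified here to a single hand per item).
    """
    right = item_hands.count("R")
    left = item_hands.count("L")
    return 100 * (right - left) / (right + left)

def classify(lq):
    """Map a laterality quotient onto the groups used in the study."""
    if lq == 100:
        return "consistent right hander"
    if lq == -100:
        return "consistent left hander"
    if 51 <= lq <= 99:
        return "inconsistent strong right hander"
    if -99 <= lq <= -51:
        return "inconsistent strong left hander"
    return "weak hander (excluded)"

print(classify(ehi_quotient(["R"] * 10)))             # consistent right hander
print(classify(ehi_quotient(["R"] * 8 + ["L"] * 2)))  # inconsistent strong right hander (LQ = 60)
```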

2.3. Stimuli

The verbal stimuli, which consisted of the four stop consonant–vowel syllables /ba/, /pa/, /ga/, and /ka/, were presented in pairs, with /ba/ constituting the initial consonant–vowel syllable of each pair throughout the experiment. The two consonant–vowel syllables were the same when the initial /ba/ was followed again by /ba/, and different when the initial /ba/ was followed by /pa/, /ga/, or /ka/. The consonant–vowel contrasts differed in terms of the linguistic features of voicing and place of articulation of the consonants. There were three possible contrasts: /ba/ vs /pa/, which share the same place of articulation but differ with regard to voicing (/ba/ is voiced, /pa/ is voiceless); /ba/ vs /ga/, which are both voiced but differ in place of articulation (/ba/ is labial, /ga/ is velar); and /ba/ vs /ka/, which differ in both place of articulation (labial/velar) and voicing (voiced/voiceless; see Table 1). The interval within pairs of stimuli varied from 400 ms to 1 s, while the time between pairs varied from 3 to 6 s. The pairs of stimuli were presented in four blocks of trials, one for each ear-stimulated/hand-of-response condition: (1) right ear stimulation—right hand response; (2) right ear stimulation—left hand response; (3) left ear stimulation—right hand response; (4) left ear stimulation—left hand response. The stimuli (presented at an average intensity of 85 dB SPL) were presented in this block format to control for the voluntary attention of the participant. Also, within each block of trials, half of the consonant–vowel syllable pairings were the same and half were different, with all blocks of trials consisting of the same stimulus pairs but in a different order.

Table 1. Speech sound contrasts in terms of whether they share or differ for the features voicing and place of articulation

Contrast | Voicing | Place of articulation
/ba/ vs /pa/ | Differ | Share
/ba/ vs /ga/ | Share | Differ
/ba/ vs /ka/ | Differ | Differ
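
The feature structure in Table 1 can be expressed directly in code. The following minimal sketch (an assumed coding that mirrors the table, not a script from the original study) recovers which features each contrast taps.

```python
# (voicing, place) feature coding for each syllable, as described in the text.
FEATURES = {
    "ba": ("voiced", "labial"),
    "pa": ("voiceless", "labial"),
    "ga": ("voiced", "velar"),
    "ka": ("voiceless", "velar"),
}

def differing_features(syll1, syll2):
    """Return the names of the features on which two syllables differ."""
    names = ("voicing", "place of articulation")
    return [n for n, a, b in zip(names, FEATURES[syll1], FEATURES[syll2]) if a != b]

for other in ("pa", "ga", "ka"):
    print(f"/ba/ vs /{other}/:", differing_features("ba", other))
# /ba/ vs /pa/: ['voicing']
# /ba/ vs /ga/: ['place of articulation']
# /ba/ vs /ka/: ['voicing', 'place of articulation']
```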

2.4. Apparatus

Each participant was seated comfortably at a table with the two-way switch, used to respond to the stimuli presented via earphones, situated directly in front of them. The consonant–vowel syllable pairs were presented through both the participant’s and the experimenter’s earphones, with the auditory signal also being sent to a voice-activated relay that activated a digital timer. The digital timer, which was used to record the participant’s reaction time, was stopped when the participant responded manually by making either a forward or a backward movement of the switch. In order to achieve monaural stimulation, only one channel of the earphones was used (Keane, 1999).

2.5. Procedure

A choice reaction-time paradigm was used to elicit the manual response to the consonant–vowel syllables presented, with each participant being tested individually. Upon hearing two consonant–vowel syllables in quick succession (via earphones), half of the participants moved a switch (held between the thumb and fingers) either towards their body if the two stimuli were the same, or away from their body if the two consonant–vowel syllables they heard were different from each other. The other half of the participants responded in a counterbalanced fashion by moving the switch away from their body if the two stimuli were the same and towards their body if they were different, with each participant keeping to the same movement–direction rule for the duration of the experiment.

Forty-eight responses to the stimulus pairs were made by each participant, with 12 of the responses being made in each of the four conditions: stimulus presentation to the right ear with a right hand response, stimulus presentation to the right ear with a left hand response, stimulus presentation to the left ear with a right hand response, and stimulus presentation to the left ear with a left hand response. The ear to which the stimuli were presented as well as the hand used to respond were counterbalanced within each handedness category, with all participants being instructed to respond as fast and as accurately as possible. Each participant also had a practice session prior to the experiment proper so that they were familiar with the reaction-time task and all the stimuli used (Keane, 1999). The original study received clearance from the National University of Ireland, Galway, Ireland.
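
As an illustration of the trial structure described above, the sketch below (not the original experimental software; the even split of the three “different” contrasts within a block is an assumption consistent with the design) builds one shuffled 12-trial block per ear/hand condition.

```python
import random

def make_block():
    """One 12-trial block: 6 'same' pairs and 6 'different' pairs (2 per contrast)."""
    same = [("ba", "ba")] * 6
    different = [("ba", second) for second in ("pa", "ga", "ka")] * 2
    trials = same + different
    random.shuffle(trials)
    return trials

# One block per ear-stimulated/hand-of-response condition, 48 trials in total.
conditions = [(ear, hand) for ear in ("right ear", "left ear")
              for hand in ("right hand", "left hand")]
session = {cond: make_block() for cond in conditions}
for (ear, hand), trials in session.items():
    print(f"{ear} / {hand}: {len(trials)} trials, first pair = {trials[0]}")
```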

3. Results

3.1. Initial analysis

The analysis carried out in the Keane (1999) study on the reaction times for the perception of the verbal stimuli when they were different (i.e. all consonant–vowel contrasts combined), with ear (right ear, left ear) and hand used to respond (right hand, left hand) as the within-subject variables and degree of hand preference (consistent vs inconsistent) as the between-subjects factor, showed a significant ear by degree of hand preference interaction for both the strong right handers and the strong left handers, with post hoc analyses indicating an REA for the inconsistent strong handers and a lack of the REA, or more of an LEA, for the consistent handers. Findings for the verbal stimuli when they were the same, i.e. /ba/ vs /ba/, are reported elsewhere (Keane, 2002). This REA for the perception of consonant–vowels in the strong right and strong left handers with an inconsistent hand preference, but an absence of the expected REA and a greater LEA in those with a consistent hand preference, who if anything would be expected to show more of an REA, not less (Khedr et al., 2002; Sheehan & Smith, 1986), suggested that having to process the verbal stimuli when they were different and make a response was causing interference in the consistent handers. There were no significant, or even near significant, ear by hand response interactions, indicating that the interference was equally evident for both hands. That the interference was equal for both hands suggests that the verbal processing was interfering with hand control at a very early level, most likely at the level of movement initiation.

Familial sinistrality (FS), that is, the presence of left-handed family members, had also been taken into account in the original Keane study. The analysis of variance that included FS (with FS, without FS) as well as degree of hand preference as a between-subjects factor revealed no significant main effect or interactions for the FS variable, which suggests that while FS may play a role in the lateralization of some levels of hand control (Keane, 2001, 2002), FS is not a factor at the level of the general motor programmer.

3.2. Current analysis

The specific concern of the present study was to determine whether the interference between perception of consonant–vowel contrasts and hand response in the consistent handers is due to interference with the voicing aspect of speech (/ba/–/pa/ contrast) or with the place of articulation aspect of speech (/ba/–/ga/ contrast). The consistent handers (right and left handers combined) were compared to the inconsistent strong handers (right and left handers combined) for each of the consonant–vowel contrasts. The strong right and strong left handers were combined in the present study because the initial analysis showed the same ear by degree of hand preference interaction for both: an REA for the inconsistent handers and more of an LEA for the consistent handers. As there were no interactions for hand of response in the initial analysis, the data for the present study consisted of right ear and left ear reaction times with right and left hand responses combined. Table 2 shows the reaction times for each ear when speech perception is based on the voicing (/ba/–/pa/ contrast), place (/ba/–/ga/ contrast), and voicing plus place (/ba/–/ka/ contrast) features of speech. It can be seen that the reaction times are slightly faster for the right ear in both the inconsistent strong handers and the consistent handers for the voicing contrast. For the place contrast, however, the faster right ear reaction time (i.e. REA) shown by the inconsistent strong handers is not displayed by the consistent handers, who instead show a longer reaction time for the right ear relative to the left. A longer right ear reaction time relative to the left ear is again apparent for the consistent handers when speech perception involves both the voicing and place features of speech.

Table 2. Reaction times to right and left ear consonant–vowel contrasts differing in the features of voicing and place of articulation for the consistent and inconsistent strong handers

A MANOVA with repeated measures was conducted with degree of hand preference (consistent vs inconsistent) as the between-subjects variable, ear (right, left) as the within-subject factor, and the consonant–vowel contrasts (/ba/ vs /pa/; /ba/ vs /ga/; /ba/ vs /ka/) as the dependent variables. No significant main effect of ear [F(3, 36) = 0.76, p = 0.52, ηp² = 0.059] or degree of hand preference [F(3, 36) = 0.22, p = 0.89, ηp² = 0.018] was found. However, there was a significant ear by degree of hand preference interaction [F(3, 36) = 2.84, p = 0.05, ηp² = 0.191], with post hoc testing confirming again, as in the initial analysis, an REA for verbal processing (all consonant–vowel contrasts combined) in the inconsistent strong handers (t(23) = 2.2, p = 0.037) and more of an LEA in those with a consistent hand preference (t(15) = 1.6, p = 0.123). Univariate ANOVAs showed that while there was no significant ear by degree of hand preference interaction for the /ba/ vs /pa/ contrast (i.e. voicing) [F(1, 38) = 0.13, p = 0.73, ηp² = 0.003], there was a significant ear by degree of hand preference interaction when the contrast was /ba/ vs /ga/ [F(1, 38) = 5.79, p = 0.02, ηp² = 0.132], showing that the REA for the inconsistent strong handers (t(23) = 2.65, p = 0.01) was absent in the consistent handers, becoming more of an LEA when the consonant–vowel contrasts involved place of articulation (t(15) = 1.06, p = 0.31; see Figure 1, where error bars reflect ±1 SE of the mean). The ear by degree of hand preference interaction was also significant for the /ba/ vs /ka/ contrast [F(1, 38) = 6.47, p = 0.02, ηp² = 0.146], showing again the REA for the inconsistent handers (t(23) = 1.13, p = 0.27) reversing to an LEA in the consistent handers (t(15) = 2.4, p = 0.03) when the contrast involved both place of articulation and voicing.
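
For readers who want to reproduce this kind of analysis, the following minimal sketch (assuming a hypothetical long-format data file and column names; it mirrors the univariate ear by group follow-up for a single contrast, not the original analysis script) uses the pandas and pingouin libraries.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format file: one row per subject x ear, with columns
# 'subject', 'group' (consistent/inconsistent), 'ear' (right/left), and 'rt'
# holding the mean reaction time for one contrast, e.g. /ba/ vs /ga/.
df = pd.read_csv("rt_ba_ga.csv")

# 2 (ear, within) x 2 (degree of hand preference, between) mixed ANOVA;
# the 'Interaction' row carries the ear by group test reported above.
aov = pg.mixed_anova(data=df, dv="rt", within="ear", subject="subject",
                     between="group", effsize="np2")
print(aov)

# Post hoc paired right- vs left-ear comparisons within each group.
for group, sub in df.groupby("group"):
    wide = sub.pivot(index="subject", columns="ear", values="rt")
    print(group, pg.ttest(wide["right"], wide["left"], paired=True))
```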

Figure 1. (A) Mean reaction times for the /ba/ vs /pa/ (voicing) contrast when presented to each ear for the consistent handers and the inconsistent strong handers. (B) The interaction between degree of hand preference and ear of presentation when the consonant–vowel contrast involved place of articulation (/ba/ vs /ga/), with the inconsistent strong handers showing a faster reaction time for the right ear relative to the left (REA) while the consistent handers do not display the REA, showing instead a longer reaction time for the right ear relative to the left. (C) The interaction again between degree of hand preference and ear stimulated when the contrast was place of articulation plus voicing (/ba/ vs /ka/).


The power of the present study was not as high as that of the original study, which was over 86%; however, the chance of detecting an effect the size of that found for the place discrimination was still 65% (Faul, Erdfelder, Lang, & Buchner, 2007). Nonetheless, while a moderate to large effect was found when speech perception involved the place of articulation aspect of speech (ηp² = 0.132), the difference in ear advantage between these same consistent handers and inconsistent strong handers for the voicing aspect was not only non-significant (nor approaching significance), but showed essentially no effect at all (ηp² = 0.003), not even a small or trivial one (Cohen, 1988).
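
As a rough cross-check of these figures, partial eta-squared can be converted to Cohen’s f and fed into a power calculation. The sketch below (an approximation using a one-way F-test in the spirit of the G*Power computation cited above, not an exact reproduction of it) comes out close to the reported 65%.

```python
from math import sqrt
from statsmodels.stats.power import FTestAnovaPower

def eta_to_f(eta_p2):
    """Cohen's f from partial eta-squared: f = sqrt(eta2 / (1 - eta2))."""
    return sqrt(eta_p2 / (1 - eta_p2))

f = eta_to_f(0.132)  # place-contrast interaction effect, ~0.39
power = FTestAnovaPower().power(effect_size=f, nobs=40, alpha=0.05, k_groups=2)
print(f"Cohen's f = {f:.2f}, approximate power = {power:.2f}")  # roughly 0.6-0.7
```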

4. Discussion

4.1. The specialism of the general motor programmer

The present study was aimed at investigating what specifically the proposed general motor programmer is specialized for. To this end, an attempt was made to pinpoint what the perception of speech and the initiation of a hand movement have in common that would result in interference when both have to be carried out at the same time. The findings from the present study indicate that the source of the interference, and the likely cause of the reversal of the REA in the consistent handers, is both hand movement initiation and speech perception vying for a single mechanism, one that appears to be specialized for place of articulation.

Given that the interference was with both hands equally, the level of interference is not with the more specific control of the hands, including hand preference, nor with the contralaterally controlled execution level (as interference at either of these levels should lead to greater interference with one hand than the other), but is at an earlier level of hand control, prior to any idea of which hand will carry out the movement; that is, at the level of a general motor programming mechanism of motor cognition that has control of the initiation of movement at an early “whole” movement stage (Keane, 1999, 2008; Kimura & Archibald, 1974). The current findings show that the conception of hand movements and the perception of speech via the general motor programming mechanism appear to be achieved specifically with respect to place of articulation. This dependence on the left hemisphere for processing the place of articulation feature of speech supports findings from both brain-damaged and normal participants showing greater reliance on the left hemisphere for processing place of articulation cues (Morais & Darwin, 1974; Oscar-Berman et al., 1975; Voyer & Techentin, 2009). The direct interference in the left hemisphere found for the consistent handers was not found for the inconsistent strong handers, which suggests that those with a less consistent hand preference may have a general motor programming ability in both hemispheres; in that case, previous studies finding a bilateral effect for place of articulation (e.g. Segalowitz & Cohen, 1989) likely comprised participants with a less consistent hand preference. Also, as the present study suggests the interference is dependent on the perception of a specific feature of speech, namely place of articulation, this supports previous findings that different aspects of speech are perceived and processed independently of one another (Studdert-Kennedy & Shankweiler, 1970).

As the present findings are driven by the place of articulation feature of speech, the apparent lack of interference with perception of the voicing aspect shows that this feature, generally viewed as related to the timing aspect of speech, is not the defining feature of the general motor programmer. This does not mean that the left hemisphere does not have a timing advantage, as there must obviously be a timing mechanism for speech, which may or may not be shared with handedness (Lashley, 1937; Wolff, Michel, Ovrut, & Drake, 1990) and perhaps with other cognitive processing as well (Casini, Burle, & Nguyen, 2009); however, processing of this aspect of speech probably occurs at another level to that of the general motor programmer. The power of the present study was not as high as would conventionally be considered optimal; there is thus a need for replication of the present study with a larger sample.

An important point about the interference found between hand movement and speech is that it was not between a hand movement and a speech production movement, but between the production of a hand movement and the perception of speech. Therefore, at the general motor programming level of brain functioning, perception and production would seem to be making use of the same mechanism, which suggests that speech is likely perceived with reference to its production. That speech is perceived with reference to how it is produced would lend support to some form of motor theory of speech perception (Liberman et al., 1967), in which it has been suggested there is a special mechanism that serves as a decoder for speech, operating by reference to the process of speech production. It has been suggested that the speech sound is used as a basis for finding the way back to the articulatory gesture that produced it, or, as Liberman et al. (1967) put it, perception involves “running the process backward” (p. 454). While speech may be perceived by reference to its production, this was not assumed to occur at the lower-level specific control of the articulatory movements, but “upstream” (Liberman, 1982, p. 158) at a higher or more remote level, in the neural structure that initiated the intended movement (Liberman & Mattingly, 1985). The general motor programming mechanism of the present study looks very much like the remote structure proposed by Liberman and others for perceiving speech. It is apparent, however, that this mechanism, contrary to that proposed by the motor theory, is not specific to speech, because it was used not only to perceive speech but also to initiate hand movement. This common mechanism between speech perception and the programming of hand movements seems to be specialized for articulatory movements, not at the level of specific articulatory control, but at an earlier, higher level of the intention or goal of the articulatory movement. The finding that place of articulation is the basis on which speech is perceived at the general motor programming level not only suggests that speech is perceived by reference to its production, but also that the mechanism used to perceive speech may be specialized to detect the intended gestures of the speaker (Liberman & Mattingly, 1985).

As has long been posited (Lashley, 1951; Liepmann, 1980; Roy & Square, 1985), it would seem that action and speech likely do share the same mechanism at the conceptual, goal-orientated level of motor control (Keane, 1999). The general motor programmer, being a proposed neural mechanism shared between speech perception and the initiation level of hand movement, appears to be concerned with determining the intention or goal of the articulators. That this shared mechanism is specifically concerned with the place of articulation aspect of speech suggests that the goal or relationship between the articulators may be spatial (i.e. the spatial positioning of the articulators). Lashley (1951) posited a space coordinate system as the basis of all movement, with the system allowing us to remember one location with reference to other locations. The spatial location refers not so much to the external spatial environment as to the internal spatial relations between articulators, that is, the positioning of one body part in relation to others (Fitzgerald, McKelvey, & Szeligo, 2002; Kimura, 1977). Lashley’s contention that it is the goal or end location that is of primary importance, and not the motor programs for how to achieve it (Russell, 1976), is again suggestive of this general motor programming mechanism, which is primarily concerned with initiating the intention or goal of the articulatory movements, whereas the more precise articulatory instructions are realized at the next level down in the motor control hierarchy, i.e. at the specific motor programming levels of speech and manual control, which in turn precede final motor execution (Keane, 2001).

There has recently been renewed interest, especially in neuroimaging studies, in the role the motor system plays in the perception of speech. Studies investigating activity in motor areas during speech perception have shown that passively listening to phonemes and syllables activates motor and pre-motor speech production areas of the brain (Wilson, Saygin, Sereno, & Iacoboni, 2004), with this activation being somatotopically organized according to the different articulators used in the production of the speech sounds (Pulvermuller et al., 2006). These studies show that there would seem to be a much closer link between speech production and speech perception than previously thought (D’Ausilio, Craighero, & Fadiga, 2012). But while these studies generally reveal the involvement of motor and pre-motor areas (Devlin & Aydelott, 2009), there is no consensus as to whether these areas are essential for speech perception or are mainly involved in overt speech production (Szenkovits, Peelle, Norris, & Davis, 2012): while the lip and tongue areas of the motor cortex have been found to be sensitive to movement of the articulators, there does not seem to be the same consistency of response in these areas during the perception of speech (Arsenault & Buchsbaum, 2016). This would suggest that the motor and pre-motor areas of the brain are not the site of the general motor programmer; it is more likely located in a parietal or temporal–parietal region of the brain (Marstaller & Burianova, 2014). However, the less clear-cut distinction between speech perception and speech production found in brain imaging studies would lend support to there being common mechanisms in the brain shared by both perception and production (Pickering & Garrod, 2013).

4.2. Implications for brain damage

Dysfunction of the areas of the brain involved in speech leads to aphasia, and aphasia has long been associated with apraxia (Meador et al., 1999). If speech and praxis share the same general motor programming mechanism, then this has implications for the association of apraxia and aphasia, in that both should be most acute following direct damage to this mechanism, especially in the consistent handers. Damage to either speech or hand control at the level below the general motor programmer might be expected to lead to either aphasia or apraxia, but not necessarily the co-occurrence of the two. That the proposed general motor programmer is involved in the perception of speech, and most likely the production of speech as well, also has implications for the type of aphasia that would result from direct damage to this level of motor cognition. Specific damage to this general motor programming mechanism should result in both perception and production difficulties, most akin to Wernicke’s-type aphasia, whereas Broca’s aphasia may result from damage to the more specific speech level below this mechanism and should not involve comprehension difficulties; alternatively, Broca’s aphasia could be due to damage to the timing aspect of speech (Baum, 2002; Blumstein, Cooper, Zurif, & Caramazza, 1977; Long et al., 2016; Moineau, Dronkers, & Bates, 2005; Platel et al., 1997). Similarly for apraxia, specific damage to the general motor programming mechanism should result in both perception and likely production difficulties as well, probably most akin to conceptual/ideational apraxia, whereas damage to the more specific motor programming of the hands would lead to bilateral hand movement difficulties most akin to ideomotor apraxia, leaving comprehension of gesture and language intact.

Given the conceptual nature of the general motor programmer, its neural substrate is most likely located in the temporal–parietal area of the brain as opposed to a pre-central location, with the more specific speech and hand processing areas located in the more frontal regions of the brain (Fridriksson et al., 2008). Support for a more posterior location for the general motor programmer comes from lesion studies finding impaired discrimination of gestures following posterior but not anterior left hemisphere injury (Heilman, Rothi, & Valenstein, 1982), whereas support for an anterior location for the more speech-specific processing comes from neuroimaging results showing pre-central cortex activation during speech-specific articulation, but not post-central activation (Pulvermuller et al., 2006). So while the pre-motor cortex is often reported as being involved in speech perception (e.g. Pulvermuller et al., 2006), it is unlikely that the general motor programmer is located in this part of the brain, largely because it is a more general mechanism responsible for perceiving and initiating other movements besides speech (i.e. hand movements); it is more likely that the pre-motor cortex is the location of a specific speech processor that receives and decodes the speech-specific programs from the general motor programmer. Crucially, though, if damage to the general motor programmer does result in both aphasia and apraxia, then the extent of the deficit should differ between the consistent and inconsistent strong handers, in that a person with an inconsistent strong hand preference—who would have a general motor programming capacity in both hemispheres—should not suffer the same extent of deficit as a person with a consistent hand preference who sustains direct damage to the general motor programmer in the left hemisphere.

4.3. Implications for general brain functioning

The current findings have implications for brain functioning generally, in that they support the contention that at a certain level perception and production are inseparable processes (MacKay, 1987; Pickering & Garrod, 2013). In addition, while the general motor programmer likely underlies both the perception and production of speech (Liberman & Mattingly, 1985), it is not specific to speech but underlies hand movements as well, and probably all gestures and action. Perception being achieved by reference to production, in the current instance via the proposed general motor programmer, may be a general feature of cognitive processing in the brain. It may be that at the higher levels of brain functioning a few general or core perception-production processors underlie all cognitive functioning, being applied to verbal and non-verbal functioning alike. Future research should therefore be directed at exploring what other cognitive functioning the general motor programmer may underpin, in addition to determining the nature and extent of other perception-production mechanisms. Also, given that the organization of the general motor programmer depends on consistency of hand preference, future studies would be advised to consider not only the direction but also the degree of hand preference when determining the organization of the brain for cognitive functioning.

5. Conclusion

Perception of speech is related to bilateral hand control at the level of the proposed general motor programmer because they share a common aspect of cognitive processing. That common aspect, and thus the defining feature of the general motor programmer, appears to be place of articulation. That the general motor programmer, which is necessary for the initiation of hand movement, is also used to perceive speech suggests that perception of speech is achieved via processes that are also involved in its production. Direct damage to this proposed mechanism should therefore result in both apraxic and aphasic difficulties. In addition, as perception of speech occurs at a higher level than the actual realization of speech articulation, this has implications for the type of aphasia that would result from damage to the general motor programmer: damage to the general motor programmer should lead to both production and perception difficulties, while damage to the lower level of speech production should not necessarily affect speech perception. Similarly for apraxia, damage to the general motor programmer should result in conceptual difficulties, whereas damage to the more specific hand programming areas need not result in difficulties at a conceptual level. The organization of the general motor programmer is also related to degree of handedness: while consistent handers have general motor programming specialized in the left hemisphere, inconsistent strong handers would appear to have a general motor programming capacity in both hemispheres. That speech perception and hand movement are so closely linked highlights the importance of the motor system for perceiving speech.

Additional information

Funding

The author received no direct funding for this research.

Notes on contributors

A.M. Keane

My overall research activity has centered on trying to determine how the brain is organized and functions, especially with regard to language and movement, as well as how this cerebral organization and cognitive functioning relate to handedness. The finding that the initiation of hand movement and the perception of speech both rely on a common or shared mechanism—that proposed mechanism being the general motor programmer (Keane, 1999)—has led to the focus of the present study, in which an attempt is made to find out exactly what this shared mechanism is specialized for that necessitates its use in both perceiving speech and initiating movement.

References

  • Arsenault, J. S., & Buchsbaum, B. R. (2016). No evidence of somatotopic place of articulation feature mapping in motor cortex during passive speech perception. Psychonomic Bulletin & Review, 23, 1231–1240. doi:10.3758/s13423-015-0988-z
  • Baum, S. R. (2002). Consonant and vowel discrimination by brain-damaged individuals: Effects of phonological segmentation. Journal of Neurolinguistics, 15, 447–461. doi:10.1016/S0911-6044(00)00020-8
  • Blumstein, S. E., Cooper, W. E., Zurif, E. B., & Caramazza, A. (1977). The perception and production of voice-onset time in aphasia. Neuropsychologia, 15, 371–383. doi:10.1016/0028-3932(77)90089-6
  • Boemio, A., Fromm, S., Braun, A., & Poeppel, D. (2005). Hierarchical and asymmetric temporal sensitivity in human auditory cortices. Nature Neuroscience, 8, 389–395. doi:10.1038/nn1409
  • Bradshaw, J. L., & Nettleton, N. C. (1981). The nature of hemispheric specialization in man. Behavioral and Brain Sciences, 4, 51–91. doi:10.1017/S0140525X00007548
  • Bryden, M. P. (1986). Dichotic listening performance, cognitive ability, and cerebral organization. Canadian Journal of Psychology, 40, 445–456. doi:10.1037/h0080108
  • Casini, L., Burle, B., & Nguyen, N. (2009). Speech perception engages a general timer: Evidence from a divided attention word identification task. Cognition, 112, 318–322. doi:10.1016/j.cognition.2009.04.005
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.
  • D’Ausilio, A., Craighero, L., & Fadiga, L. (2012). The contribution of the frontal lobe to the perception of speech. Journal of Neurolinguistics, 25, 328–335. doi:10.1016/j.jneuroling.2010.02.003
  • Devlin, J., & Aydelott, J. (2009). Speech perception: Motoric contributions versus the motor theory. Current Biology, 19, R198–R200. doi:10.1016/j.cub.2009.01.005
  • Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175–191. doi:10.3758/BF03193146
  • Fitzgerald, L., McKelvey, J., & Szeligo, F. (2002). Mechanisms of dressing apraxia: A case study. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 15, 148–155.
  • Fridriksson, J., Moss, J., Davis, B., Baylis, G., Bonilha, L., & Rorden, C. (2008). Motor speech perception modulates the cortical language areas. Neuroimage, 41, 605–613. doi:10.1016/j.neuroimage.2008.02.046
  • Geffen, G., & Traub, E. (1979). Preferred hand and familial sinistrality in dichotic monitoring. Neuropsychologia, 17, 527–531. doi:10.1016/0028-3932(79)90061-7
  • Heilman, K., Rothi, L., & Valenstein, E. (1982). Two forms of ideomotor apraxia. Neurology, 32, 342–346. doi:10.1212/WNL.32.4.342
  • Hiscock, M., Inch, R., & Ewing, C. (2005). Constant and variable aspects of the dichotic listening right-ear advantage: A comparison of standard and signal detection tasks. Laterality, 10, 517–534. doi:10.1080/13576500442000283
  • Hugdahl, K., Bodner, T., Weiss, E., & Benke, T. (2003). Dichotic listening performance and frontal lobe function. Cognitive Brain Research, 16, 58–65. doi:10.1016/S0926-6410(02)00210-0
  • Jamison, H. L., Watkins, K. E., Bishop, D. V., & Matthews, P. (2006). Hemispheric specialization for processing auditory nonspeech stimuli. Cerebral Cortex, 16, 1266–1275. doi:10.1093/cercor/bhj068
  • Keane, A. M. (1999). Cerebral organization of motor programming and verbal processing as a function of degree of hand preference and familial sinistrality. Brain and Cognition, 40, 500–515. doi:10.1006/brcg.1999.1115
  • Keane, A. M. (2001). Motor control of the hands: The effect of familial sinistrality. International Journal of Neuroscience, 110, 25–41. doi:10.3109/00207450108994219
  • Keane, A. M. (2002). Direction of hand preference: The connection with speech and the influence of familial handedness. International Journal of Neuroscience, 112, 1287–1303. doi:10.1080/00207450290158205
  • Keane, A. M. (2008). What aspect of handedness is general motor programming related to? International Journal of Neuroscience, 118, 519–530. doi:10.1080/00207450601074667
  • Keane, A. M. (2012). The general motor programmer: Its specialization for speech perception and movement. World Journal of Neuroscience, 2, 156–158. doi:10.4236/wjns.2012.23024
  • Khedr, E., Hamed, E., Said, A., & Basahi, J. (2002). Handedness and language cerebral lateralization. European Journal of Applied Physiology, 87, 469–473. doi:10.1007/s00421-002-0652-y
  • Kimura, D. (1977). Acquisition of a motor skill after left-hemisphere damage. Brain, 100, 527–542. doi:10.1093/brain/100.3.527
  • Kimura, D. (1982). Left-hemisphere control of oral and brachial movements and their relation to communication. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 298, 135–149. doi:10.1098/rstb.1982.0077
  • Kimura, D., & Archibald, Y. (1974). Motor functions of the left hemisphere. Brain, 97, 337–350. doi:10.1093/brain/97.1.337
  • Laguitton, V., De Graaf, J., Chauvel, P., & Liégeois-Chauvel, C. (2000). Identification reaction times of voiced/voiceless continua: A right-ear advantage for VOT values near the phonetic boundary. Brain and Language, 75, 153–162. doi:10.1006/brln.2000.2350
  • Lashley, K. S. (1937). Functional determinants of cerebral localization. Archives of Neurology and Psychiatry, 21, 371–387. doi:10.1001/archneurpsyc.1937.02260200143013
  • Lashley, K. S. (1951). The problem of serial order in behavior. In L. A. Jeffress (Ed.), Cerebral mechanisms in behavior (pp. 112–146). New York, NY: Wiley.
  • Liberman, A. M. (1982). On finding that speech is special. American Psychologist, 37, 148–167. doi:10.1037/0003-066X.37.2.148
  • Liberman, A. M., & Mattingly, I. G. (1985). The motor theory of speech perception revised. Cognition, 21, 1–36. doi:10.1016/0010-0277(85)90021-6
  • Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74, 431–461. doi:10.1037/h0020279
  • Liepmann, H. (1980). The left hemisphere and action (A translation from Münchener medizinische Wochenschrift 1905, 48 & 49). In Translations from Liepmann’s essays on apraxia (Research Bulletin No. 506, pp. 17–50). London, ON: Department of Psychology, University of Western Ontario.
  • Lomas, J., & Kimura, D. (1976). Intrahemispheric interaction between speaking and sequential manual activity. Neuropsychologia, 14, 23–33. doi:10.1016/0028-3932(76)90004-X
  • Long, M. A., Katlowitz, K. A., Svirsky, M. A., Clary, R. C., Byun, T. M., Majaj, N., ... Greenlee, J. D. (2016). Functional segregation of cortical regions underlying speech timing and articulation. Neuron, 89, 1187–1193. doi:10.1016/j.neuron.2016.01.032
  • MacKay, D. G. (1987). The organization of perception and action. New York, NY: Springer-Verlag. doi:10.1007/978-1-4612-4754-8
  • Marstaller, L., & Burianová, H. (2014). The multisensory perception of co-speech gestures—A review and meta-analysis of neuroimaging studies. Journal of Neurolinguistics, 30, 69–77. doi:10.1016/j.jneuroling.2014.04.003
  • Mateer, C., & Kimura, D. (1977). Impairment of nonverbal oral movements in aphasia. Brain and Language, 4, 262–276. doi:10.1016/0093-934X(77)90022-0
  • McFarland, K., Ashton, R., & Jeffery, C. K. (1989). Lateralized dual-task performance: The effects of spatial and muscular repositioning. Neuropsychologia, 27, 1267–1276. doi:10.1016/0028-3932(89)90039-0
  • Meador, K., Loring, D., Lee, K., Hughes, M., Lee, G., Nichols, M., & Heilman, K. (1999). Cerebral lateralization: Relationship of language and ideomotor praxis. Neurology, 53, 2028–2031. doi:10.1212/WNL.53.9.2028
  • Moineau, S., Dronkers, N., & Bates, E. (2005). Exploring the processing continuum of single-word comprehension in aphasia. Journal of Speech, Language, and Hearing Research, 48, 884–896. doi:10.1044/1092-4388(2005/061)
  • Morais, J., & Darwin, C. J. (1974). Ear differences for same-different reaction times to monaurally presented speech. Brain and Language, 1, 383–390. doi:10.1016/0093-934X(74)90015-7
  • Moscovitch, M. (1973). Language and the cerebral hemispheres: Reaction-time studies and their implications for models of cerebral dominance. In P. Pliner, L. Krames, & T. Alloway (Eds.), Communication and affect: Language and thought (pp. 89–126). New York, NY: Academic Press. doi:10.1016/B978-0-12-558250-6.50012-2
  • Ojemann, G. A. (1984). Common cortical and thalamic mechanisms for language and motor functions. American Journal of Physiology, 246, R901–R903.
  • Oldfield, R. (1971). The assessment and analysis of handedness: The Edinburgh Inventory. Neuropsychologia, 9, 97–113. doi:10.1016/0028-3932(71)90067-4
  • Oscar-Berman, M., Zurif, E. B., & Blumstein, S. (1975). Effects of unilateral brain damage on the processing of speech sounds. Brain and Language, 2, 345–355. doi:10.1016/S0093-934X(75)80075-7
  • Perecman, E., & Kellar, L. (1981). The effect of voice and place among aphasic, nonaphasic right-damaged, and normal subjects on a metalinguistic task. Brain and Language, 12, 213–223. doi:10.1016/0093-934X(81)90015-8
  • Perkell, J. (2012). Movement goals and feedback and feedforward control mechanisms in speech production. Journal of Neurolinguistics, 25, 382–407. doi:10.1016/j.jneuroling.2010.02.011
  • Pickering, M. J., & Garrod, S. (2013). An integrated theory of language production and comprehension. Behavioral and Brain Sciences, 36, 329–392. doi:10.1017/S0140525X12001495
  • Platel, H., Price, C., Baron, J., Wise, R., Lambert, J., Frackowiak, R., ... Eustache, F. (1997). The structural components of music perception: A functional anatomical study. Brain, 120, 229–243. doi:10.1093/brain/120.2.229
  • Pulvermüller, F., Huss, M., Kherif, F., Martin, F., Hauk, O., & Shtyrov, Y. (2006). Motor cortex maps articulatory features of speech sounds. Proceedings of the National Academy of Sciences, 103, 7865–7870. doi:10.1073/pnas.0509989103
  • Rastatter, M. P., & Gallaher, A. J. (1982). Reaction-times of normal subjects to monaurally presented verbal and tonal stimuli. Neuropsychologia, 20, 465–473. doi:10.1016/0028-3932(82)90045-8
  • Roy, E., & Square, P. (1985). Common considerations in the study of limb, verbal and oral apraxia. In E. A. Roy (Ed.), Neuropsychological studies of apraxia and related disorders (pp. 111–161). Amsterdam: North-Holland.
  • Russell, D. G. (1976). Spatial location cues and movement production. In G. E. Stelmach (Ed.), Motor control (pp. 67–85). New York, NY: Academic Press. doi:10.1016/B978-0-12-665950-4.50008-0
  • Schirmer, A. (2004). Timing speech: A review of lesion and neuroimaging findings. Cognitive Brain Research, 21, 269–287. doi:10.1016/j.cogbrainres.2004.04.003
  • Schwartz, J., & Tallal, P. (1980). Rate of acoustic change may underlie hemispheric specialization for speech perception. Science, 207, 1380–1381. doi:10.1126/science.7355297
  • Scott, S. K., & McGettigan, C. (2013). Do temporal processes underlie left hemisphere dominance in speech perception? Brain and Language, 127, 36–45. doi:10.1016/j.bandl.2013.07.006
  • Segalowitz, S. J., & Cohen, H. (1989). Right hemisphere EEG sensitivity to speech. Brain and Language, 37, 220–231. doi:10.1016/0093-934X(89)90016-3
  • Sheehan, E. P., & Smith, H. V. (1986). Cerebral lateralization and handedness and their effects on verbal and spatial reasoning. Neuropsychologia, 24, 531–540. doi:10.1016/0028-3932(86)90097-7
  • Simos, P. G., Molfese, D. L., & Brenden, R. A. (1997). Behavioral and electrophysiological indices of voicing-cue discrimination: Laterality patterns and development. Brain and Language, 57, 122–150. doi:10.1006/brln.1997.1836
  • Studdert-Kennedy, M., & Shankweiler, D. (1970). Hemispheric specialization for speech perception. The Journal of the Acoustical Society of America, 48, 579–594. doi:10.1121/1.1912174
  • Szenkovits, G., Peelle, J., Norris, D., & Davis, M. (2012). Individual differences in premotor and motor recruitment during speech perception. Neuropsychologia, 50, 1380–1392. doi:10.1016/j.neuropsychologia.2012.02.023
  • Tallal, P. (1983). A precise timing mechanism may underlie a common speech perception and production area in the peri-Sylvian cortex of the dominant hemisphere. Behavioral and Brain Sciences, 6, 219–220. doi:10.1017/S0140525X00015636
  • Tallal, P., & Newcombe, F. (1978). Impairment of auditory perception and language comprehension in dysphasia. Brain and Language, 5, 13–24. doi:10.1016/0093-934X(78)90003-2
  • Voyer, D., & Techentin, C. (2009). Dichotic listening with consonant-vowel pairs: The role of place of articulation and stimulus dominance. Journal of Phonetics, 37, 162–172. doi:10.1016/j.wocn.2008.12.001
  • Wallwork, J. F. (1985). Language and linguistics (2nd ed.). Oxford: Heinemann.
  • Watson, N. V., & Kimura, D. (1989). Right-hand superiority for throwing but not for intercepting. Neuropsychologia, 27, 1399–1414. doi:10.1016/0028-3932(89)90133-4
  • Westerhausen, R., Kompus, K., & Hugdahl, K. (2014). Mapping hemispheric symmetries, relative asymmetries, and absolute asymmetries underlying the auditory laterality effect. Neuroimage, 84, 962–970. doi:10.1016/j.neuroimage.2013.09.074
  • Wilson, S., Saygin, A., Sereno, M., & Iacoboni, M. (2004). Listening to speech activates motor areas involved in speech production. Nature Neuroscience, 7, 701–702. doi:10.1038/nn1263
  • Wolff, P. H., Michel, G. F., Ovrut, M., & Drake, C. (1990). Rate and timing precision of motor coordination in developmental dyslexia. Developmental Psychology, 26, 349–359. doi:10.1037/0012-1649.26.3.349