Is this my voice or yours? The role of emotion and acoustic quality in self-other voice discrimination in schizophrenia

Pages 335-353 | Received 02 Sep 2015, Accepted 29 Jun 2016, Published online: 25 Jul 2016
 

ABSTRACT

Introduction: Impairments in self-other voice discrimination have been consistently reported in schizophrenia, and associated with the severity of auditory verbal hallucinations (AVHs). This study probed the interactions between voice identity, voice acoustic quality, and semantic valence in a self-other voice discrimination task in schizophrenia patients compared with healthy subjects. The relationship between voice identity discrimination and AVH severity was also explored.

Methods: Seventeen chronic schizophrenia patients and 19 healthy controls were asked to read aloud a list of adjectives with emotional or neutral content. Participants’ voices were recorded in this first session. In the behavioural task, 840 spoken words differing in identity (self/non-self), acoustic quality (undistorted/distorted), and semantic valence (negative/positive/neutral) were presented. Participants indicated whether each word was spoken in their own voice or in another person’s voice, or responded that they were unsure.

Results: Patients were less accurate than controls in the recognition of self-generated speech with negative content only. Impaired recognition of negative self-generated speech was associated with AVH severity (“voices conversing”).

Conclusions: These results suggest that abnormalities in higher order processes (evaluation of the salience of a speech stimulus) modulate impaired self-other voice discrimination in schizophrenia. Abnormal processing of negative self-generated speech may play a role in the experience of AVH.

Acknowledgments

We are grateful to all the participants of this study for their contribution to science.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. The term “semantic valence” is used throughout the manuscript to indicate neutral vs. emotional semantic content, as opposed to valence associated with prosodic content.

2. In the study of Allen et al. (Citation2004), the tendency for increased misattribution errors in patients with hallucinations when the words had negative content (e.g., “corrupt”, “contaminated”, “unfortunate”) was not statistically significant. As the number of trials per condition was relatively low (24 undistorted SGS, 24 distorted SGS, 24 undistorted non-self speech, 24 distorted non-self speech, with only approximately 8 negative, 8 positive, and 8 neutral words in each condition), the study may have lacked the statistical power needed to detect differences related to the negative speech condition.

3. Lowering the pitch, instead of increasing it, was justified by previous evidence indicating that it yields more prominent neural responses to pitch feedback perturbations (Chen et al., Citation2012; Liu, Meshman, Behroozmand, & Larson, Citation2011).
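The note does not specify how the pitch lowering was implemented. Purely as an illustration, a distorted version of a recorded word could be generated with an off-the-shelf pitch shifter such as librosa; the file names and the −4 semitone shift below are hypothetical values, not taken from the paper:

```python
# Illustrative only: lower the pitch of a recorded word by a fixed number of
# semitones. The -4 semitone shift and file names are hypothetical.
import librosa
import soundfile as sf

y, sr = librosa.load("self_word.wav", sr=None)              # original recording
y_low = librosa.effects.pitch_shift(y, sr=sr, n_steps=-4)   # lower the pitch
sf.write("self_word_distorted.wav", y_low, sr)              # save distorted version
```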

4. In a complementary analysis, we tested the effects of education and general IQ on the behavioural data, by computing a separate ANOVA with these variables as covariates. The effects of education (p = .921) and general IQ (p = .723) were not significant.
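The paper does not report the software used for this check. A simplified sketch of an ANOVA with education and general IQ entered as covariates, assuming a per-participant accuracy summary in a data frame with hypothetical column names, might look as follows:

```python
# Illustrative sketch of an ANCOVA on overall accuracy with education and IQ
# as covariates (simplified to a between-subjects model; file and column
# names are hypothetical, not the authors' actual variables).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("behavioural_summary.csv")          # one row per participant
model = smf.ols("accuracy ~ C(group) + education + iq", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))               # tests for group, education, iq
```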

5. In order to rule out acoustic differences between negative SGS correctly recognized as “self” and negative SGS misidentified as “other” by schizophrenia patients, we also performed an acoustic analysis of these speech stimuli, based on the individual performance of each patient. The paired samples t-tests did not reveal differences in the acoustic properties of negative SGS associated with correct responses and with errors (duration: p = .620; mean F0: p = .710; mean intensity: p = .337).
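As a minimal sketch of this per-patient comparison, assuming the mean acoustic measures for correctly and incorrectly identified negative SGS items are available in a long-format table (file and column names below are hypothetical), the paired t-tests could be run as follows:

```python
# Illustrative sketch: for each patient, compare mean acoustic values of
# negative self-generated items recognised as "self" (correct) vs.
# misattributed to "other" (error). Data layout and names are hypothetical.
import pandas as pd
from scipy.stats import ttest_rel

df = pd.read_csv("negative_sgs_acoustics.csv")   # patient, response, measures
correct = df[df["response"] == "correct"].set_index("patient").sort_index()
error = df[df["response"] == "error"].set_index("patient").sort_index()

for measure in ["duration", "mean_f0", "mean_intensity"]:
    t, p = ttest_rel(correct[measure], error[measure])
    print(f"{measure}: t = {t:.2f}, p = {p:.3f}")
```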

6. In order to rule out the effects of the number of “unsure” responses, we ran a repeated-measures ANOVA on the proportion of correct responses, adjusted for the number of “self” and “other” responses in each semantic valence and acoustic quality condition (i.e., the number of “unsure” responses was subtracted from the total number of available responses). A significant group by semantic valence interaction was observed after controlling for the effects of voice acoustic properties (F(2, 60) = 4.305, p = .018, partial η2 = .125). We followed up this interaction by running separate ANOVAs for each semantic valence type, keeping identity and acoustic quality as within-subject factors. A significant group by identity by acoustic quality interaction was observed when analysing negative speech (F(1, 33) = 4.736, p = .037, partial η2 = .126), confirming a specific impairment in the recognition of the identity of negative SGS in schizophrenia.
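The software used for this analysis is not reported. As an illustration of the group by semantic valence step only, a mixed-design ANOVA with group as the between-subjects factor and valence as the within-subject factor could be computed with the pingouin package (the long-format data frame and its column names are hypothetical; the three-way follow-up ANOVAs are not shown):

```python
# Illustrative sketch of the group x semantic-valence step of this analysis,
# using pingouin's mixed ANOVA (one between- and one within-subject factor).
# The data file and column names are hypothetical.
import pandas as pd
import pingouin as pg

df = pd.read_csv("adjusted_accuracy_long.csv")   # subject, group, valence, acc
aov = pg.mixed_anova(data=df, dv="acc", within="valence",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc", "np2"]])
```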

7. We note, though, that we did not control for formant dispersion, contrary to previous studies (e.g., Chhabra, Badcock, Maybery, & Leung, Citation2012). We focus our acoustic analysis on F0, as this measure has been shown to play the most critical role in self-voice recognition and in the discrimination between familiar and unfamiliar voices (e.g., Baumann & Belin, Citation2010; Latinus & Belin, Citation2012; Latinus, McAleer, Bestelmeyer, & Belin, Citation2013; Xu, Homae, Hashimoto, & Hagiwara, Citation2013).

Additional information

Funding

This work was supported by Grants IF/00334/2012 and PTDC/PSI-PCL/116626/2010 awarded to A.P.P. The two grants were funded by Fundação para a Ciência e a Tecnologia (FCT, Portugal) and, in addition, by FEDER (Fundo Europeu de Desenvolvimento Regional) through the European programs QREN and COMPETE.
