65 Views · 75 CrossRef citations to date · 0 Altmetric
Studies in Phonetics and Phonology

An experimental study of the durational characteristics of the voice during the expression of emotion

Pages 85-90 | Published online: 02 Jun 2009


Read on this site (7)

Shari R. Baum & Marc D. Pell. (1997) Production of affective and linguistic prosody by brain-damaged patients. Aphasiology 11:2, pages 177-198.
M.ª José Mallo Carrera & Alfonso Jiménez Fernández. (1988) El reconocimiento de emociones a través de la voz [The recognition of emotions through the voice]. Studies in Psychology 9:33-34, pages 31-52.
Michael T. Motley. (1974) Acoustic correlates of lies. Western Speech 38:2, pages 81-87.
William G. Franklin. (1972) An experimental study of the acoustic characteristics of simulated emotions. Southern Speech Communication Journal 38:2, pages 168-180.
George B. Strother. (1949) The Role of Muscle Action in Interpretative Reading. The Journal of General Psychology 41:1, pages 3-20.
Grant Fairbanks. (1942) Toward an experimental aesthetics of the theater. Quarterly Journal of Speech 28:1, pages 50-55.

Articles from other publishers (68)

Kamaldeep Kaur & Parminder Singh. (2022) Impact of Feature Extraction and Feature Selection Algorithms on Punjabi Speech Emotion Recognition Using Convolutional Neural Network. ACM Transactions on Asian and Low-Resource Language Information Processing 21:5, pages 1-23.
Jasenka Rakas, Matthew Alvarado, Kezhi He, Drew Kim & Della Qu. (2022) Analysis of Controller-Pilot Communication Messages with Natural Language Processing.
Jutono Gondohanindijo, Edy Noersasongko, Pujiono, Muljono, Ahmad Zainul Fanani, Affandy & Ruri Suko Basuki. (2019) Comparison Method in Indonesian Emotion Speech Classification.
Turgut Özseven. (2019) A novel feature selection method for speech emotion recognition. Applied Acoustics 146, pages 320-326.
Pravin M. Ghate & Suresh D. Shirbahadurkar. 2019. Information and Communication Technology for Competitive Strategies, pages 615-624.
Ming-Chuan Chiu & Li-Wei Ko. (2016) Develop a personalized intelligent music selection system based on heart rate variability and machine learning. Multimedia Tools and Applications 76:14, pages 15607-15639.
Joanna Śmiecińska. (2017) The perception and interpretation of contrastive focus by Polish children and adults. Poznan Studies in Contemporary Linguistics 53:3.
Joanna Śmiecińska. (2016) Emotional and linguistic prosody development in Polish children: Three different paths. Poznan Studies in Contemporary Linguistics 52:3.
P. Gangamohan, Sudarsana Reddy Kadiri & B. Yegnanarayana. 2016. Toward Robotic Socially Believable Behaving Systems - Volume I, pages 205-238.
Łukasz Stolarski. (2015) Pitch Patterns in Vocal Expression of 'Happiness' and 'Sadness' in the Reading Aloud of Prose on the Basis of Selected Audiobooks. Research in Language 13:2, pages 140-161.
S. Ananthi & P. Dhanalakshmi. 2015. Computational Intelligence in Data Mining - Volume 3, pages 65-73.
Eszter Tisljár-Szabó & Csaba Pléh. (2014) Ascribing emotions depending on pause length in native and foreign language speech. Speech Communication 56, pages 35-48.
Björn W. Schuller & Anton M. Batliner. 2013. Computational Paralinguistics, pages 107-158.
D. Govind & S. R. Mahadeva Prasanna. (2012) Expressive speech synthesis: a review. International Journal of Speech Technology 16:2, pages 237-260.
László Puskás, János László & Éva Fülöp. (2012) Lear király lelkiállapot-változása első és utolsó monológjának szövegbeli és akusztikai jegyei alapján [Changes in King Lear's emotional state as reflected in the textual and acoustic features of his first and last monologues]. Pszichológia 32:2, pages 91-118.
Diana P. Szameitat, Kai Alter, André J. Szameitat, Dirk Wildgruber, Annette Sterr & Chris J. Darwin. (2009) Acoustic profiles of distinct emotional expressions in laughter. The Journal of the Acoustical Society of America 126:1, pages 354-366.
Jia Rong, Gang Li & Yi-Ping Phoebe Chen. (2009) Acoustic feature selection for automatic emotion recognition from speech. Information Processing & Management 45:3, pages 315-328.
Mumtaz Begum, Raja N. Ainon, Zuraidah M. Don & Gerry Knowles. (2007) Adding an Emotions Filter to Malay Text-to-Speech System.
Donn Morrison & Liyanage C. De Silva. (2007) Voting ensembles for spoken affect classification. Journal of Network and Computer Applications 30:4, pages 1356-1365.
Donn Morrison, Ruili Wang & Liyanage C. De Silva. (2007) Ensemble methods for spoken emotion recognition in call-centres. Speech Communication 49:2, pages 98-112.
Vladimir Hozjan & Zdravko Kačič. (2006) A rule-based emotion-dependent feature extraction method for emotion analysis from speech. The Journal of the Acoustical Society of America 119:5, pages 3109-3120.
Matti Airas & Paavo Alku. (2006) Emotions in Vowel Segments of Continuous Speech: Analysis of the Glottal Flow Using the Normalised Amplitude Quotient. Phonetica 63:1, pages 26-46.
Shahrukh K. Taseer. (2005) Speaker Identification for Speakers with Deliberately Disguised Voices using Glottal Pulse Information.
Kala Lakshminarayanan, Dorit Ben Shalom, Virginie van Wassenhove, Diana Orbelo, John Houde & David Poeppel. (2003) The effect of spectral manipulations on the identification of affective and linguistic prosody. Brain and Language 84:2, pages 250-263.
Patrik N. Juslin & Petri Laukka. (2003) Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin 129:5, pages 770-814.
Marc D. Pell. (2001) Influence of emotion and focus location on prosody in matched statements and questions. The Journal of the Acoustical Society of America 109:4, pages 1668-1680.
R. Cowie, E. Douglas-Cowie, N. Tsapatsoulis, G. Votsis, S. Kollias, W. Fellenz & J.G. Taylor. (2001) Emotion recognition in human-computer interaction. IEEE Signal Processing Magazine 18:1, pages 32-80.
Patrik N. Juslin & Petri Laukka. (2001) Impact of intended emotion intensity on cue utilization and decoding accuracy in vocal expression of emotion. Emotion 1:4, pages 381-412.
Peter Greasley, Carol Sherrard & Mitch Waterman. (2000) Emotion in Language and Speech: Methodological Issues in Naturalistic Approaches. Language and Speech 43:4, pages 355-375.
Marc D. Pell. (1999) Fundamental Frequency Encoding of Linguistic and Emotional Prosody by Right Hemisphere-Damaged Speakers. Brain and Language 69:2, pages 161-192.
Marc D. Pell. (1999) The Temporal Organization of Affective and Non-Affective Speech in Patients with Right-Hemisphere Infarcts. Cortex 35:4, pages 455-477.
Martine Laframboise, Peter J. Snyder & Henri Cohen. (1997) Cerebral Hemispheric Control of Speech during the Intracarotid Sodium Amytal Procedure: An Acoustic Exploration. Brain and Language 60:2, pages 243-254.
Patricia Rockwell, David B. Buller & Judee K. Burgoon. (1997) Measurement of deceptive voices: Comparing acoustic and perceptual data. Applied Psycholinguistics 18:4, pages 471-484.
Marc D. Pell & Shari R. Baum. (1997) Unilateral Brain Damage, Prosodic Comprehension Deficits, and the Acoustic Cues to Prosody. Brain and Language 57:2, pages 195-214.
Elliott D. Ross, Robin D. Thompson & Joseph Yenkosky. (1997) Lateralization of Affective Prosody in Brain and the Callosal Integration of Hemispheric Language Functions. Brain and Language 56:1, pages 27-54.
Robert Ruiz, Emmanuelle Absil, Bernard Harmegnies, Claude Legros & Dolors Poch. (1996) Time- and spectrum-related variabilities in stressed speech under laboratory and real conditions. Speech Communication 20:1-2, pages 111-129.
Chris Baber & Jan Noyes. (1996) Automatic Speech Recognition in Adverse Environments. Human Factors: The Journal of the Human Factors and Ergonomics Society 38:1, pages 142-155.
Terri L. Bonebright, Jeri L. Thompson & Daniel W. Leger. (1996) Gender stereotypes in the expression and perception of vocal affect. Sex Roles 34:5-6, pages 429-445.
Iain R. Murray & John L. Arnott. (1995) Implementation and testing of a system for producing emotion-by-rule in synthetic speech. Speech Communication 16:4, pages 369-390.
Gerald W. McRoberts, Michael Studdert-Kennedy & Donald P. Shankweiler. (1995) The role of fundamental frequency in signaling linguistic stress and affect: Evidence for a dissociation. Perception & Psychophysics 57:2, pages 159-174.
Diana Van Lancker & John J. Sidtis. (1992) The Identification of Affective-Prosodic Stimuli by Left- and Right-Hemisphere-Damaged Subjects. Journal of Speech, Language, and Hearing Research 35:5, pages 963-970.
Klaus R. Scherer, Rainer Banse, Harald G. Wallbott & Thomas Goldbeck. (1991) Vocal cues in emotion encoding and decoding. Motivation and Emotion 15:2, pages 123-148.
Harry Hollien. 1990. The Acoustics of Crime, pages 255-271.
Klaus R. Scherer. 1989. The Measurement of Emotions, pages 233-259.
Elliott D. Ross, Jerold A. Edmondson, G. Burton Seibert & Richard W. Homan. (1988) Acoustic analysis of affective prosody during right-sided Wada Test: A within-subjects verification of the right hemisphere's role in language. Brain and Language 33:1, pages 128-145.
Jerold A. Edmondson, Jin-Lieh Chan, G. Burton Seibert & Elliott D. Ross. (1987) The effect of right-brain damage on acoustical measures of affective prosody in Taiwanese patients. Journal of Phonetics 15:3, pages 219-233.
Harry Hollien, Laura Geison & James W. Hicks Jr. (1987) Voice Stress Evaluators and Lie Detection. Journal of Forensic Sciences 32:2, pages 11143J.
Elliott D. Ross, Jerold A. Edmondson & G. Burton Seibert. (1986) The effect of affect on various acoustic measures of prosody in tone and non-tone languages: A comparison based on computer analysis of voice. Journal of Phonetics 14:2, pages 283-302.
Arlene S. Walker-Andrews & Wendy Grolnick. (1983) Discrimination of vocal expressions by young infants. Infant Behavior and Development 6:4, pages 491-498.
Yoshiyuki Horii. (1983) Some acoustic characteristics of oral reading by ten- to twelve-year-old children. Journal of Communication Disorders 16:4, pages 257-267.
Maciej Pakosz. (1983) Attitudinal judgments in intonation: Some evidence for a theory. Journal of Psycholinguistic Research 12:3, pages 311-326.
Danielle Duez. (1982) Silent and Non-Silent Pauses in Three Speech Styles. Language and Speech 25:1, pages 11-28.
P. Winkler. 1981. Methoden der Analyse von Face-to-Face-Situationen [Methods for the Analysis of Face-to-Face Situations], pages 9-46.
Harry Hollien. (1980) Vocal Indicators of Psychological Stress. Annals of the New York Academy of Sciences 347:1, pages 47-72.
Edward J. Clemmer, Daniel C. O'Connell & Wayne Loui. (1979) Rhetorical Pauses in Oral Reading. Language and Speech 22:4, pages 397-405.
Klaus R. Scherer. 1979. Emotions in Personality and Psychopathology, pages 493-529.
C. Chevrie-Muller, N. Seguier, A. Spira & M. Dordain. (1978) Recognition of Psychiatric Disorders From Voice Quality. Language and Speech 21:1, pages 87-111.
J. Donald Ragsdale. (1976) Relationships Between Hesitation Phenomena, Anxiety, and Self-Control in a Normal Communication Situation. Language and Speech 19:3, pages 257-265.
Louis-Jean Boe & Hippolyte Rakotofiringa. (1975) A Statistical Analysis of Laryngeal Frequency: Its Relationship to Intensity Level and Duration. Language and Speech 18:1, pages 1-13.
Angela B. Steer. (1974) Sex Differences, Extraversion and Neuroticism in Relation to Speech Rate During the Expression of Emotion. Language and Speech 17:1, pages 80-86.
Stanley Feldstein. 1972. Studies in Dyadic Communication, pages 91-113.
Klaus R. Scherer, Judy Koivumaki & Robert Rosenthal. (1972) Minimal cues in the vocal communication of affect: Judging emotions from content-masked speech. Journal of Psycholinguistic Research 1:3, pages 269-285.
Richard Luchsinger & Gottfried E. Arnold. 1970. Handbuch der Stimm- und Sprachheilkunde [Handbook of Voice and Speech Medicine], pages 87-106.
Robyn Mason Dawes & Ernest Kramer. (1966) A Proximity Analysis of Vocally Expressed Emotion. Perceptual and Motor Skills 22:2, pages 571-574.
Frederick Williams & Barbara Sundene. (1965) Dimensions of recognition: Visual vs. Vocal expression of emotion. AV communication review 13:1, pages 44-52.
Elizabeth Uldall. (1960) Attitudinal Meanings Conveyed by Intonation Contours. Language and Speech 3:4, pages 223-234.
Richard Luchsinger & Gottfried E. Arnold. 1959. Lehrbuch der Stimm- und Sprachheilkunde [Textbook of Voice and Speech Medicine], pages 253-707.
Wilbert Pronovost. (1942) Research Contributions to Voice Improvement. Journal of Speech Disorders 7:4, pages 313-318.
