ORIGINAL INVESTIGATION

The development of the Athens Emotional States Inventory (AESI): collection, validation and automatic processing of emotionally loaded sentences

Pages 312-322 | Received 13 Nov 2014, Accepted 16 Jan 2015, Published online: 23 Mar 2015

References

  • Albornoz EM, Vignolo LD, Martinez CE, Milone DH. 2013. Genetic wrapper approach for automatic diagnosis of speech disorders related to Autism. In: IEEE 14th International Symposium on Computational Intelligence and Informatics (CINTI), pp. 387–392.
  • Asgari M, Shafran I. 2010. Predicting severity of Parkinson's disease from speech. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 5201–5204.
  • Beale R, Peter C. 2008. The role of affect and emotion in HCI. In: Affect and Emotion in Human-Computer Interaction. Berlin: Springer, pp. 1–11.
  • Bone D, Lee CC, Black MP, Williams ME, Lee S, Levitt P, Narayanan S. 2014. The psychologist as an interlocutor in autism spectrum disorder assessment: insights from a study of spontaneous prosody. J Speech Lang Hear Res 57:1162–1177.
  • Brown TA, Chorpita BF, Barlow DH. 1998. Structural relationships among dimensions of the DSM-IV anxiety and mood disorders and dimensions of negative affect, positive affect, and autonomic arousal. J Abnorm Psychol 107(2):179–192.
  • Burkhardt F, Paeschke A, Rolfes M, Sendlmeier W, Weiss B. 2005. A database of German emotional speech. In: Proceedings of Interspeech, pp. 1517–1520.
  • Busso C, Bulut M, Lee C, Kazemzadeh A, Mower E, Kim S, et al. 2008. IEMOCAP: Interactive emotional dyadic motion capture database. J Lang Resour Eval 42(1):335–359.
  • Chiţu AG, van Vulpen M, Takapoui P, Rothkrantz LJ. 2008. Building a Dutch multimodal corpus for emotion recognition. In: Proceedings of Language Resources and Evaluation Conference (LREC). Workshop on Corpora for Research on Emotion and Affect, pp. 53–56.
  • Cohen AS, Elvevåg B. 2014. Automated computerized analysis of speech in psychiatric disorders. Curr Opin Psychiatry 27(3):203–209.
  • Cootes TF, Edwards GJ, Taylor CJ. 2001. Active appearance models. IEEE Trans Pattern Anal Mach Intell 23(6):681–685.
  • Cullen C, Vaughan B, Kousidis S, Wang Y, McDonnell C, Campbell D. 2006. Generation of high quality audio natural emotional speech corpus using task based mood induction. In: Proceedings of International Conference on Multidisciplinary Information Sciences and Technologies Extremadura.
  • De Ruyter B, Saini P, Markopoulos P, Van Breemen A. 2005. Assessing the effects of building social intelligence in a robotic interface for the home. Interact Comput 17(1): 522–541.
  • Dhall A, Goecke R, Lucey S, Gedeon T. 2012. Collecting large, richly annotated facial-expression databases from movies. IEEE Multimedia 19(3):34–41.
  • Dimitriadis D, Maragos P, Potamianos A. 2005. Robust AM-FM features for speech recognition. IEEE Sig Proc Lett 12(1): 621–624.
  • Douglas-Cowie E, Campbell N, Cowie R, Roach P. 2003. Emotional speech: towards a new generation of databases. Speech Commun 40(1):33–60.
  • Duda RO, Hart PE, Stork DG. 2000. Pattern classification. 2nd ed. Oxford: John Wiley & Sons.
  • El Ayadi M, Kamel MS, Karray F. 2011. Survey on speech emotion recognition: features, classification schemes, and databases. Patt Recog 44(1):572–587.
  • Foster ME. 2007. Enhancing human-computer interaction with embodied conversational agents. In: Stephanidis C, editor. Proceedings of the Fourth International Conference on Universal Access in Human-Computer Interaction (UAHCI). Ambient Interaction. Berlin: Springer, pp. 828–837.
  • Fredrickson BL. 2000. Cultivating positive emotions to optimize health and well-being. Prevention & Treatment 3, Article 0001a.
  • Grossman RB, Tager-Flusberg H. 2008. Reading faces for information about words and emotions in adolescents with autism. Res Autism Spectr Disord 2(1):681–695.
  • Hobson RP, Ouston J, Lee A. 1988. What's in a face? The case of autism. Br J Psychol 79(4):441–453.
  • Hudlicka E. 2003. To feel or not to feel: the role of affect in human-computer interaction. Int J Hum Comput Stud 59(1):1–32.
  • Joachims T. 1999. Making large-scale SVM learning practical. In: Schölkopf B, Burges C, Smola A, editors. Advances in kernel methods – support vector learning. Cambridge, MA: MIT Press.
  • Kanade T, Cohn JF, Tian Y. 2000. Comprehensive database for facial expression analysis. In: Proceedings of IEEE International Conference on Automatic Face and Gesture Recognition, pp. 46–53.
  • Krebs CJ. 1999. Ecological methodology. 2nd ed. Menlo Park, CA: Addison-Wesley Educational Publishers.
  • Lara DR, Pinto O, Akiskal K, Akiskal HS. 2006. Toward an integrative model of the spectrum of mood, behavioral and personality disorders based on fear and anger traits: I. Clinical implications. J Affect Disord 94(1):67–87.
  • Lazaridis A, Bourna V, Fakotakis N. 2010. Comparative evaluation of phone duration models for Greek emotional speech. J Comput Sci 6(1):341–349.
  • Lee CC, Mower E, Busso C, Lee S, Narayanan S. 2011. Emotion recognition using a hierarchical binary decision tree approach. Speech Commun 53(1):1162–1171.
  • Lee CM, Narayanan SS. 2005. Toward detecting emotions in spoken dialogs. IEEE Trans Speech Audio Proc 13(1):293–303.
  • Leijdekkers P, Gay V, Wong F. 2013. CaptureMyEmotion: a mobile app to improve emotion learning for autistic children using sensors. In: Proceedings of IEEE International Symposium on Computer-Based Medical Systems (CBMS), pp. 381–384.
  • Leite I, Henriques R, Martinho C, Paiva A. 2013. Sensors in the wild: exploring electrodermal activity in child-robot interaction. In: Proceedings of ACM/IEEE International Conference on Human-Robot Interaction, pp. 41–48.
  • Leppänen JM. 2006. Emotional information processing in mood disorders: a review of behavioral and neuroimaging findings. Curr Opin Psychiatry 19(1):34–39.
  • Low LS, Maddage MC, Lech M, Sheeber LB, Allen NB. 2011. Detection of clinical depression in adolescents’ speech during family interactions. IEEE Trans Biomed Eng 58(3):574–586.
  • Martin JC, Caridakis G, Devillers L, Karpouzis K, Abrilian S. 2009. Manual annotation and automatic image processing of multimodal emotional behaviors: validating the annotation of TV interviews. Pers Ubiquit Comput 13(1):69–76.
  • McClure EB, Pope K, Hoberman AJ, Pine DS, Leibenluft E. 2003. Facial expression recognition in adolescents with mood and anxiety disorders. Am J Psychiatry 160(6):1172–1174.
  • McIntosh DN, Reichmann-Decker A, Winkielman P, Wilbarger JL. 2006. When the social mirror breaks: deficits in automatic, but not voluntary, mimicry of emotional facial expressions in autism. Dev Sci 9(1):295–302.
  • McKeown G, Valstar M, Cowie R, Pantic M, Schroder M. 2012. The SEMAINE database: annotated multimodal records of emotionally colored conversations between a person and a limited agent. IEEE Trans Affect Comput 3(1):5–17.
  • Mitra U, Emken BA, Lee S, Li M, Rozgic V, Thatte G, et al. 2012. KNOWME: a case study in wireless body area sensor network design. IEEE Commun Mag 50(1):116–125.
  • Moore E, Clements MA, Peifer JW, Weisser L. 2008. Critical analysis of the impact of glottal features in the classification of clinical depression in speech. IEEE Trans Biomed Eng 55(1):96–107.
  • Mundt JC, Snyder PJ, Cannizzaro MS, Chappie K, Geralts DS. 2007. Voice acoustic measures of depression severity and treatment response collected via interactive voice response (IVR) technology. J Neurolinguist 20(1):50–64.
  • Nijholt A. 2003. Disappearing computers, social actors and embodied agents. In: Proceedings of IEEE International Conference on Cyberworlds, pp. 128–134.
  • Nwe TL, Foo SW, De Silva LC. 2003. Speech emotion recognition using hidden Markov models. Speech Commun 41(1):603–623.
  • Picard RW. 2009. Future affective technology for autism and emotion communication. Phil Trans R Soc 364(1):3575–3584.
  • Sanchez MH, Tür G, Ferrer L, Hakkani-Tür D. 2010. Domain adaptation and compensation for emotion detection. In: Proceedings of Interspeech, pp. 2874–2877.
  • Schuller B, Reiter S, Rigoll G. 2006. Evolutionary feature generation in speech emotion recognition. In: Proceedings of International Conference on Multimedia and Expo (ICME), pp. 5–8.
  • Schuller B, Rigoll G. 2006. Timing levels in segment-based speech emotion recognition. In: Proceedings of Interspeech, pp. 1818–1821.
  • Schuller B, Steidl S, Batliner A. 2009. The INTERSPEECH 2009 Emotion Challenge. In: Proceedings of Interspeech, pp. 312–315.
  • Schuller B, Steidl S, Batliner A, Vinciarelli A, Scherer K, Ringeval F, et al. 2013. The INTERSPEECH 2013 computational paralinguistics challenge: social signals, conflict, emotion, autism. In: Proceedings of Interspeech.
  • Skelley J, Fischer R, Sarma A, Heisele B. 2006. Recognizing expressions in a new database containing played and natural expressions. In: Proceedings of IEEE International Conference on Pattern Recognition (ICPR), pp. 1220–1225.
  • Soleymani M, Caro MN, Schmidt EM, Sha CY, Yang YH. 2013. 1000 songs for emotional analysis of music. In: Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia (CrowdMM), pp. 1–6.
  • van Santen JPH, Prud'hommeaux E, Black LM, Mitchell M. 2010. Computational prosodic markers for autism. Autism 14(3):215–236.
  • Ververidis D, Kotropoulos C. 2003. A state of the art review on emotional speech databases. In: Proceedings of Richmedia Conference, pp. 109–119.
  • Ververidis D, Kotropoulos C. 2006. Emotional speech recognition: resources, features, and methods. Speech Commun 48(1):1162–1181.
  • Ververidis D, Kotsia I, Kotropoulos C, Pitas I. 2008. Multi-modal emotion-related data collection within a virtual earthquake emulator. In: Proceedings of Language Resources and Evaluation Conference (LREC), Morocco.
  • Vlasenko B, Schuller B, Wendemuth A, Rigoll G. 2007. Combining frame and turn-level information for robust recognition of emotions within speech. In: Proceedings of Interspeech, pp. 2249–2252.
  • Watson D, Clark LA, Carey G. 1988. Positive and negative affectivity and their relation to anxiety and depressive disorders. J Abnorm Psychol 97(3):346–353.
  • Wöllmer M, Schuller B, Eyben F, Rigoll G. 2010. Combining long short-term memory and dynamic bayesian networks for incremental emotion-sensitive artificial listening. IEEE J Selected Topics Sig Proc 4(1):867–881.
  • Wu S, Falk T, Chan W. 2011. Automatic speech emotion recognition using modulation spectral features. Speech Commun 53(1):768–785.
  • Yirmiya N, Sigman MD, Kasari C, Mundy P. 1992. Empathy and cognition in high-functioning autism. Child Dev 63(1):150–160.
  • Young SJ, Evermann G, Gales M, Hain T, Kershaw D, Liu X, et al. 2006. The HTK Book. Cambridge, England: Entropic Cambridge Research Laboratory.
  • Zhou G, Hansen J, Kaiser J. 1998. Classification of speech under stress based on features derived from the nonlinear Teager energy operator. In: Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 549–552.
