
‘Mister D.J., Cheer Me Up!’: Musical and Textual Features for Automatic Mood Classification

Pages 13-34 | Published online: 30 Apr 2010
