Music Emotion Recognition with Standard and Melodic Audio Features

REFERENCES

  • Adams, C. 1976. Melodic contour typology. Ethnomusicology 20:179–215.
  • Carvalho, V. R., and C. Chao. 2005. Sentiment retrieval in popular music based on sequential learning. In Proceedings of the 28th ACM SIGIR conference. New York, NY: ACM.
  • Cataltepe, Z., Y. Tsuchihashi, and H. Katayose. 2007. Music genre classification using MIDI and audio features. EURASIP Journal on Advances in Signal Processing 2007(1):275–279.
  • Ekman, P. 1992. An argument for basic emotions. Cognition and Emotion 6(3):169–200.
  • Feng, Y., Y. Zhuang, and Y. Pan. 2003. Popular music retrieval by detecting mood. In Proceedings of the 26th annual international ACM SIGIR conference on research and development in information retrieval, 375–376. New York, NY: ACM.
  • Friberg, A. 2008. Digital audio emotions: An overview of computer analysis and synthesis of emotional expression in music. Paper presented at the 11th International Conference on Digital Audio Effects, Espoo, Finland, September 1–4.
  • Friberg, A., and A. Hedblad. 2011. A comparison of perceptual ratings and computed audio features. In Proceedings of the 8th sound and music computing conference, 122–127. SMC.
  • Gabrielsson, A., and E. Lindström. 2001. The influence of musical structure on emotional expression. In Music and emotion: Theory and research, ed. P. N. Juslin and J. A. Sloboda, 223–248. Oxford, UK: Oxford University Press.
  • Hall, M., E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten. 2009. The WEKA data mining software: An update. SIGKDD Explorations 11(1):10–18.
  • Hevner, K. 1936. Experimental studies of the elements of expression in music. American Journal of Psychology 48(2):246–268.
  • Hu, X., and J. S. Downie. 2010. When lyrics outperform audio for music mood classification: A feature analysis. In Proceedings of the 11th international society for music information retrieval conference (ISMIR 2010), 619–624. Utrecht, The Netherlands: ISMIR.
  • Hu, X., J. S. Downie, C. Laurier, M. Bay, and A. F. Ehmann. 2008. The 2007 MIREX audio mood classification task: Lessons learned. In Proceedings of the 9th international society for music information retrieval conference (ISMIR 2008), 462–467. Philadelphia, PA, USA: ISMIR.
  • Huron, D. 2000. Perceptual and cognitive applications in music information retrieval. Cognition 10(1):83–92.
  • Katayose, H., M. Imai, and S. Inokuchi. 1988. Sentiment extraction in music. In Proceedings of the 9th international conference on pattern recognition, 1083–1087. IEEE.
  • Kellaris, J. J., and R. J. Kent. 1993. An exploratory investigation of responses elicited by music varying in tempo, tonality, and texture. Journal of Consumer Psychology 2(4):381–401.
  • Konishi, T., S. Imaizumi, and S. Niimi. 2000. Vibrato and emotion in singing voice (abstract). In Proceedings of the sixth international conference on music perception and cognition (ICMPC), August 2000 (CD-rom). Keele, UK: Keele University.
  • Lartillot, O., and P. Toiviainen. 2007. A Matlab toolbox for musical feature extraction from audio. In Proceedings of the 10th international conference on digital audio effects (DAFx-07), 237–244. Bordeaux, France: DAFx.
  • Laurier, C. 2011. Automatic classification of musical mood by content-based analysis (PhD thesis, Universitat Pompeu Fabra, Barcelona, Spain).
  • Laurier, C., and P. Herrera. 2007. Audio music mood classification using support vector machine. MIREX task on audio mood classification. In Proceedings of the 8th international conference on music information retrieval (ISMIR 2007), September 23–27, 2007. Vienna, Austria: ISMIR.
  • Laurier, C., O. Lartillot, T. Eerola, and P. Toiviainen. 2009. Exploring relationships between audio features and emotion in music. In Proceedings of the 7th triennial conference of the European Society for the Cognitive Sciences of Music (ESCOM 2009), 260–264. Jyväskylä, Finland: ESCOM.
  • Li, T., and M. Ogihara. 2003. Detecting emotion in music. In Proceedings of the 2003 international symposium on music information retrieval (ISMIR 03), 239–240. ISMIR.
  • Li, T., and M. Ogihara. 2004. Content-based music similarity search and emotion detection. In Proceedings of the 2004 IEEE international conference on acoustics, speech, and signal processing, 5:V–705. IEEE.
  • Liu, D., and L. Lu. 2003. Automatic mood detection from acoustic music data. In Proceedings of the 4th international conference on music information retrieval (ISMIR 2003). Baltimore, MD, USA: ISMIR.
  • Lu, L., D. Liu, and H.-J. Zhang. 2006. Automatic mood detection and tracking of music audio signals. IEEE Transactions on Audio, Speech and Language Processing 14(1):5–18.
  • McVicar, M., and T. Freeman. 2011. Mining the correlation between lyrical and audio features and the emergence of mood. In Proceedings of the 12th international society for music information retrieval conference (ISMIR 2011), 783–788. Miami, FL, USA: ISMIR.
  • Meng, A., P. Ahrendt, J. Larsen, and L. K. Hansen. 2007. Temporal feature integration for music genre classification. IEEE Transactions on Audio, Speech and Language Processing 15(5):275–279.
  • Meyers, O. C. 2007. A mood-based music classification and exploration system (Master’s thesis, Massachusetts Institute of Technology, Cambridge, MA, USA).
  • Ortony, A., and T. J. Turner. 1990. What’s basic about basic emotions? Psychological Review 97(3):315–331.
  • Panda, R., and R. P. Paiva. 2012. Music emotion classification: Dataset acquisition and comparative analysis. Paper presented at the 15th International Conference on Digital Audio Effects (DAFx-12), York, UK, October.
  • Panda, R., B. Rocha, and R. P. Paiva. 2013. Dimensional music emotion recognition: Combining standard and melodic audio features. Paper presented at the Computer Music Modelling and Retrieval conference (CMMR 2013), Marseille, France, October 15–18.
  • Robnik-Šikonja, M., and I. Kononenko. 2003. Theoretical and empirical analysis of ReliefF and RReliefF. Machine Learning 53(1–2):23–69.
  • Rocha, B. 2011. Genre classification based on predominant melodic pitch contours (Master's thesis, Universitat Pompeu Fabra, Barcelona, Spain).
  • Russell, J. A. 1980. A circumplex model of affect. Journal of Personality and Social Psychology 39(6):1161–1178.
  • Salamon, J., and E. Gómez. 2012. Melody extraction from polyphonic music signals using pitch contour characteristics. IEEE Transactions on Audio, Speech, and Language Processing 20(6):1759–1770.
  • Salamon, J., B. Rocha, and E. Gómez. 2012. Musical genre classification using melody features extracted from polyphonic music signals. In Proceedings of the IEEE international conference on acoustics, speech and signal processing (ICASSP). Kyoto, Japan: IEEE.
  • Schubert, E. 1999. Measurement and time series analysis of emotion in music (PhD thesis, School of Music and Music Education, University of New South Wales, Sydney, Australia).
  • Seashore, C. 1967. Psychology of music. New York, NY: Dover.
  • Song, Y., S. Dixon, and M. Pearce. 2012. Evaluation of musical features for emotion classification. In Proceedings of the 13th international society for music information retrieval conference (ISMIR 2012), 523–528. Porto, Portugal: ISMIR.
  • Sundberg, J. 1987. The science of the singing voice. Dekalb, IL, USA: Northern Illinois University Press.
  • Thayer, R. E. 1989. The biopsychology of mood and arousal. New York, NY: Oxford University Press.
  • Wang, J., H. Lee, S. Jeng, and H. Wang. 2010. Posterior weighted Bernoulli mixture model for music tag annotation and retrieval. Paper presented at the APSIPA Annual Summit and Conference (ASC) 2010. December 14–17, 2010. Biopolis, Singapore.
  • Wang, J., H.-Y. Lo, S. Jeng, and H.-M. Wang. 2010. Audio classification using semantic transformation and classifier ensemble. In Proceedings of the 6th International WOCMAT and New Media Conference (WOCMAT 2010), YZU, Taoyuan, Taiwan, November 12–13, 2010, 2–5.
  • Yang, D., and W. Lee. 2004. Disambiguating music emotion using software agents. In Proceedings of the 5th international conference on music information retrieval, 52–58. Barcelona, Spain: ISMIR.
  • Yang, Y.-H., Y.-C. Lin, H. Cheng, I. Liao, Y. Ho, and H. H. Chen. 2008. Toward multi-modal music emotion classification. In Proceedings of the 2008 Pacific-Rim conference on multimedia, LNCS 5353:70–79. Berlin, Heidelberg: Springer.
  • Yang, Y.-H., Y.-C. Lin, Y.-F. Su, and H. H. Chen. 2008. A regression approach to music emotion recognition. IEEE Transactions on Audio, Speech, and Language Processing 16(2):448–457.
