References
- Aljanaki, A., Yang, Y. H., & Soleymani, M. (2017). Developing a benchmark for emotional analysis of music. PLoS ONE, 12(3), e0173392. https://doi.org/10.1371/journal.pone.0173392
- Barthet, M., Fazekas, G., & Sandler, M. (2012, June 19–22). Music emotion recognition: From content- to context-based models. Proceedings of Ninth International Symposium on Computer Music Modeling and Retrieval, London, UK.
- Barthet, M., Marston, D., Baume, C., Fazekas, G., & Sandler, M. (2013, November 4–8). Design and evaluation of semantic mood models for music recommendation. Proceedings of 14th International Society for Music Information Retrieval Conference, Curitiba, PR, Brazil.
- Baume, C., Fazekas, G., Barthet, M., Marston, D., & Sandler, M. (2014, January 26–29). Selection of audio features for music emotion recognition using production music. Audio Engineering Society Conference: 53rd International Conference: Semantic Audio, London, UK.
- Berenzweig, A., Logan, B., Ellis, D. P., & Whitman, B. (2004). A large-scale evaluation of acoustic and subjective music-similarity measures. Computer Music Journal, 28(2), 63–76. https://doi.org/10.1162/014892604323112257
- Bigand, E., Vieillard, S., Madurell, F., Marozeau, J., & Dacquet, A. (2005). Multidimensional scaling of emotional responses to music: The effect of musical expertise and of the duration of the excerpts. Cognition & Emotion, 19(8), 1113–1139. https://doi.org/10.1080/02699930500204250
- Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: The self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25(1), 49–59. https://doi.org/10.1016/0005-7916(94)90063-9
- Calvo, R. A., & D'Mello, S. (2010). Affect detection: An interdisciplinary review of models, methods, and their applications. IEEE Transactions on Affective Computing, 1(1), 18–37. https://doi.org/10.1109/T-AFFC.2010.1
- Chang, C. Y., Lo, C. Y., Wang, C. J., & Chung, P. C. (2010, December 16–18). A music recommendation system with consideration of personal emotion. International Computer Symposium (ICS2010), Tainan, Taiwan.
- Chen, Y. A., Yang, Y. H., Wang, J. C., & Chen, H. (2015, April 19–24). The AMG1608 dataset for music emotion recognition. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, Australia.
- Collier, G. L. (2007). Beyond valence and activity in the emotional connotations of music. Psychology of Music, 35(1), 110–131. https://doi.org/10.1177/0305735607068890
- Cooke, D. (1990). The language of music (Reprint). Oxford University Press, USA.
- Deng, J. J., & Leung, C. (2012, October 23–25). Emotion-based music recommendation using audio features and user playlist. 6th International Conference on New Trends in Information Science, Service Science and Data Mining (ISSDM2012), Taipei, Taiwan.
- Dibben, N. (2004). The role of peripheral feedback in emotional experience with music. Music Perception, 22(1), 79–115. https://doi.org/10.1525/mp.2004.22.1.79
- Drossos, K., Floros, A., Giannakoulopoulos, A., & Kanellopoulos, N. (2015). Investigating the impact of sound angular position on the listener affective state. IEEE Transactions on Affective Computing, 6(1), 27–42. https://doi.org/10.1109/TAFFC.2015.2392768
- Eerola, T. (2011). Are the emotions expressed in music genre-specific? An audio-based evaluation of datasets spanning classical, film, pop and mixed genres. Journal of New Music Research, 40(4), 349–366. https://doi.org/10.1080/09298215.2011.602195
- Eerola, T., Lartillot, O., & Toiviainen, P. (2009, October 26–30). Prediction of multi-dimensional emotion ratings in music from audio using multivariate regression models. 10th International Society for Music Information Retrieval Conference (ISMIR 2009), Kobe, Japan.
- Eerola, T., & Vuoskoski, J. K. (2011). A comparison of the discrete and dimensional models of emotion in music. Psychology of Music, 39(1), 18–49. https://doi.org/10.1177/0305735610362821
- Eerola, T., & Vuoskoski, J. K. (2012). A review of music and emotion studies: Approaches, emotion models, and stimuli. Music Perception: An Interdisciplinary Journal, 30(3), 307–340. https://doi.org/10.1525/mp.2012.30.3.307
- Ekman, P. (1992). An argument for basic emotions. Cognition & Emotion, 6(3–4), 169–200. https://doi.org/10.1080/02699939208411068
- Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. J. Power (Eds.), Handbook of cognition and emotion (pp. 45–60). John Wiley & Sons, Ltd.
- Gabrielsson, A. (2001). Emotions in strong experiences with music. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 431–449). Oxford University Press.
- Giannakopoulos, T., & Pikrakis, A. (2014). Introduction to audio analysis: A MATLAB approach. Academic Press.
- Griffiths, D., Cunningham, S., & Weinel, J. (2015, September 8–11). A self-report study that gauges perceived and induced emotion with music. Internet Technologies and Applications (ITA 15), Wrexham, UK.
- Griffiths, D., Cunningham, S., & Weinel, J. (2016, July 12–14). An interactive music playlist generator that responds to user emotion and context. Electronic Visualisation and the Arts (EVA), London, UK.
- Hadjidimitriou, S. K., & Hadjileontiadis, L. J. (2013). EEG-based classification of music appraisal responses using time-frequency analysis and familiarity ratings. IEEE Transactions on Affective Computing, 4(2), 161–172. https://doi.org/10.1109/T-AFFC.2013.6
- Hagras, H. (2018). Toward human-understandable, explainable AI. Computer, 51(9), 28–36. https://doi.org/10.1109/MC.2018.3620965
- Hu, X., & Kando, N. (2012, October 8–12). User-centered measures vs. system effectiveness in finding similar songs. 13th International Society for Music Information Retrieval Conference (ISMIR 2012), Porto, Portugal.
- Hu, X., & Yang, Y. H. (2014, September 14–20). A study on cross-cultural and cross-dataset generalizability of music mood regression models. 40th International Computer Music Conference (ICMC 2014), Athens, Greece.
- Hu, X., & Yang, Y. H. (2017). Cross-dataset and cross-cultural music mood prediction: A case on Western and Chinese pop songs. IEEE Transactions on Affective Computing, 8(2), 228–240. https://doi.org/10.1109/TAFFC.2016.2523503
- Huq, A., Bello, J. P., & Rowe, R. (2010). Automated music emotion recognition: A systematic evaluation. Journal of New Music Research, 39(3), 227–244. https://doi.org/10.1080/09298215.2010.513733
- Ilie, G., & Thompson, W. F. (2006). A comparison of acoustic cues in music and speech for three dimensions of affect. Music Perception, 23(4), 319–330. https://doi.org/10.1525/mp.2006.23.4.319
- Jun, S., Rho, S., Han, B. J., & Hwang, E. (2008, July 29–August 1). A fuzzy inference-based music emotion recognition system. 5th International Conference on Visual Information Engineering (VIE 2008), Xi'an, China.
- Juslin, P. N. (2000). Cue utilization in communication of emotion in music performance: Relating performance to perception. Journal of Experimental Psychology: Human Perception and Performance, 26(6), 1797–1813. https://doi.org/10.1037/0096-1523.26.6.1797
- Juslin, P. N. (2009). Emotional responses to music. In S. Hallam, I. Cross, & M. Thaut (Eds.), The Oxford handbook of music psychology (1st ed.). Oxford University Press.
- Juslin, P. N. (2013). What does music express? Basic emotions and beyond. Frontiers in Psychology, 4, 596. https://doi.org/10.3389/fpsyg.2013.00596
- Juslin, P. N., & Laukka, P. (2004). Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. Journal of New Music Research, 33(3), 217–238. https://doi.org/10.1080/0929821042000317813
- Juslin, P. N., & Sloboda, J. (2011). Handbook of music and emotion: Theory, research, applications. Oxford University Press.
- Kallinen, K. (2005). Emotional ratings of music excerpts in the western art music repertoire and their self-organization in the Kohonen neural network. Psychology of Music, 33(4), 373–393. https://doi.org/10.1177/0305735605056147
- Kamalzadeh, M., Baur, D., & Möller, T. (2012, October 8–12). A survey on music listening and management behaviours. 13th International Society for Music Information Retrieval Conference (ISMIR 2012), Porto, Portugal.
- Kim, Y. E., Schmidt, E. M., Migneco, R., Morton, B. G., Richardson, P., Scott, J., Speck, J. A., & Turnbull, D. (2010, August 9–13). Music emotion recognition: A state of the art review. 11th International Society for Music Information Retrieval Conference (ISMIR 2010), Utrecht, Netherlands.
- Koelstra, S., Muhl, C., Soleymani, M., Lee, J. S., Yazdani, A., Ebrahimi, T., Pun, T., Nijholt, A., & Patras, I. (2011). DEAP: A database for emotion analysis; using physiological signals. IEEE Transactions on Affective Computing, 3(1), 18–31. https://doi.org/10.1109/T-AFFC.2011.15
- Krause, A. E., North, A. C., & Hewitt, L. Y. (2015). Music-listening in everyday life: Devices and choice. Psychology of Music, 43(2), 155–170. https://doi.org/10.1177/0305735613496860
- Kreutz, G., Ott, U., Teichmann, D., Osawa, P., & Vaitl, D. (2008). Using music to induce emotions: Influences of musical preference and absorption. Psychology of Music, 36(1), 101–126. https://doi.org/10.1177/0305735607082623
- Lartillot, O., & Toiviainen, P. (2007, September 10–15). A MATLAB toolbox for musical feature extraction from audio. 10th International Conference on Digital Audio Effects (DAFx-07), Bordeaux, France.
- Leman, M., Vermeulen, V., De Voogdt, L., Moelants, D., & Lesaffre, M. (2005). Prediction of musical affect using a combination of acoustic structural cues. Journal of New Music Research, 34(1), 39–67. https://doi.org/10.1080/09298210500123978
- Lu, L., Liu, D., & Zhang, H. J. (2005). Automatic mood detection and tracking of music audio signals. IEEE Transactions on Audio, Speech, and Language Processing, 14(1), 5–18. https://doi.org/10.1109/TSA.2005.860344
- Malheiro, R., Panda, R., Gomes, P., & Paiva, R. P. (2016). Emotionally-relevant features for classification and regression of music lyrics. IEEE Transactions on Affective Computing, 9(2), 240–254. https://doi.org/10.1109/TAFFC.2016.2598569
- Mo, S., & Niu, J. (2017). A novel method based on OMPGW method for feature extraction in automatic music mood classification. IEEE Transactions on Affective Computing, 10(3), 313–324. https://doi.org/10.1109/T-AFFC.5165369
- Mobasher, B., Cooley, R., & Srivastava, J. (2000). Automatic personalization based on web usage mining. Communications of the ACM, 43(8), 142–151. https://doi.org/10.1145/345124.345169
- Myint, E. E. P., & Pwint, M. (2010, July 5–7). An approach for multi-label music mood classification. 2nd International Conference on Signal Processing Systems, Dalian, China.
- North, A. C., Hargreaves, D. J., & Hargreaves, J. J. (2004). Uses of music in everyday life. Music Perception, 22(1), 41–77. https://doi.org/10.1525/mp.2004.22.1.41
- Panda, R., Malheiro, R., & Paiva, R. P. (2018). Novel audio features for music emotion recognition. IEEE Transactions on Affective Computing, 11(4), 614–626. https://doi.org/10.1109/T-AFFC.5165369
- Panksepp, J. (1992). A critical role for ‘affective neuroscience’ in resolving what is basic about basic emotions. Psychological Review, 99(3), 554–560. https://doi.org/10.1037/0033-295X.99.3.554
- Pesek, M., Strle, G., Kavčič, A., & Marolt, M. (2017). The Moodo dataset: Integrating user context with emotional and color perception of music for affective music information retrieval. Journal of New Music Research, 46(3), 246–260. https://doi.org/10.1080/09298215.2017.1333518
- Resnicow, J. E., Salovey, P., & Repp, B. H. (2004). Is recognition of emotion in music performance an aspect of emotional intelligence? Music Perception, 22(1), 145–158. https://doi.org/10.1525/mp.2004.22.1.145
- Ricci, F., Rokach, L., & Shapira, B. (2011). Introduction to recommender systems handbook. In F. Ricci, L. Rokach, B. Shapira, & P. B. Kantor (Eds.), Recommender systems handbook (pp. 1–35). Springer US.
- Ritossa, D. A., & Rickard, N. S. (2004). The relative utility of ‘pleasantness’ and ‘liking’ dimensions in predicting the emotions expressed by music. Psychology of Music, 32(1), 5–22. https://doi.org/10.1177/0305735604039281
- Roda, A., Canazza, S., & De Poli, G. (2014). Clustering affective qualities of classical music: Beyond the valence-arousal plane. IEEE Transactions on Affective Computing, 5(4), 364–376. https://doi.org/10.1109/TAFFC.2014.2343222
- Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178. https://doi.org/10.1037/h0077714
- Saari, P., Barthet, M., Fazekas, G., Eerola, T., & Sandler, M. (2013). Semantic models of musical mood: Comparison between crowd-sourced and curated editorial tags. 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), 1–6.
- Saari, P., Fazekas, G., Eerola, T., Barthet, M., Lartillot, O., & Sandler, M. (2015). Genre-adaptive semantic computing and audio-based modelling for music mood annotation. IEEE Transactions on Affective Computing, 7(2), 122–135. https://doi.org/10.1109/TAFFC.2015.2462841
- Scherer, K. R. (2000). Psychological models of emotion. In J. C. Borod (Ed.), The neuropsychology of emotion (pp. 137–162). Oxford University Press.
- Scherer, K. R. (2004). Which emotions can be induced by music? What are the underlying mechanisms? And how can we measure them? Journal of New Music Research, 33(3), 239–251. https://doi.org/10.1080/0929821042000317822
- Scherer, K. R., Shuman, V., Fontaine, J., & Soriano Salinas, C. (2013). The GRID meets the wheel: Assessing emotional feeling via self-report. In J. J. R. Fontaine, K. R. Scherer, & C. Soriano (Eds.), Components of emotional meaning: A sourcebook. Oxford University Press.
- Schmidt, E. M., Turnbull, D., & Kim, Y. E. (2010, March 29–31). Feature selection for content-based, time-varying musical emotion regression. International Conference on Multimedia Information Retrieval (MIR '10), Philadelphia, PA.
- Schubert, E. (2003). Update of the Hevner adjective checklist. Perceptual and Motor Skills, 96(3 Suppl), 1117–1122. https://doi.org/10.2466/pms.2003.96.3c.1117
- Schubert, E. (2011). Continuous self-report methods. In P. N. Juslin & J. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications. Oxford University Press.
- Shao, B., Wang, D., Li, T., & Ogihara, M. (2009). Music recommendation based on acoustic features and user access patterns. IEEE Transactions on Audio, Speech, and Language Processing, 17(8), 1602–1611. https://doi.org/10.1109/TASL.2009.2020893
- Southall, N. (2006). Imperfect sound forever. Stylus Magazine, 1.
- Stumpf, S., & Muscroft, S. (2011, July 11–15). When users generate music playlists: When words leave off, music begins? 2011 IEEE International Conference on Multimedia and Expo, Barcelona, Spain.
- Sun, X., & Tang, Y. (2009, December 12–14). Automatic music emotion classification using a new classification algorithm. Second International Symposium on Computational Intelligence and Design, Changsha, China.
- Thayer, R. E. (1990). The biopsychology of mood and arousal. Oxford University Press.
- Vuoskoski, J. K., & Eerola, T. (2011). Measuring music-induced emotion: A comparison of emotion models, personality biases, and intensity of experiences. Musicae Scientiae, 15(2), 159–173. https://doi.org/10.1177/1029864911403367
- Wang, J. C., Yang, Y. H., Wang, H. M., & Jeng, S. K. (2015). Modeling the affective content of music with a Gaussian mixture model. IEEE Transactions on Affective Computing, 6(1), 56–68. https://doi.org/10.1109/TAFFC.2015.2397457
- Watson, D., & Tellegen, A. (1985). Toward a consensual structure of mood. Psychological Bulletin, 98(2), 219–235. https://doi.org/10.1037/0033-2909.98.2.219
- Watson, D., & Tellegen, A. (1999). Issues in dimensional structure of affect—Effects of descriptors, measurement error, and response formats: Comment on Russell and Carroll (1999). Psychological Bulletin, 125(5), 601–610. https://doi.org/10.1037/0033-2909.125.5.601
- Wedin, L. (1972). Evaluation of a three-dimensional model of emotional expression in music. Psychological Laboratories. University of Stockholm.
- Weinel, J., Cunningham, S., Griffiths, D., Roberts, S., & Picking, R. (2014). Affective audio. Leonardo Music Journal, 24, 17–20. https://doi.org/10.1162/LMJ_a_00189
- Yang, Y. H., & Chen, H. H. (2011). Prediction of the distribution of perceived music emotions using discrete samples. IEEE Transactions on Audio, Speech, and Language Processing, 19(7), 2184–2196. https://doi.org/10.1109/TASL.2011.2118752
- Yang, Y. H., & Hu, X. (2012, October 8–12). Cross-cultural music mood classification: A comparison on English and Chinese songs. 13th International Society for Music Information Retrieval Conference (ISMIR 2012), Porto, Portugal.
- Yang, Y. H., Lin, Y. C., Cheng, H. T., & Chen, H. H. (2008, October 26–31). Mr. Emo: Music retrieval in the emotion plane. 16th ACM International Conference on Multimedia, Vancouver, British Columbia, Canada.
- Yang, Y. H., Lin, Y. C., Su, Y. F., & Chen, H. H. (2008). A regression approach to music emotion recognition. IEEE Transactions on Audio, Speech, and Language Processing, 16(2), 448–457. https://doi.org/10.1109/TASL.2007.911513
- Zentner, M., & Eerola, T. (2010). Self-report measures and models. In P. N. Juslin & J. A. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp. 187–221). Oxford University Press.
- Zentner, M., Grandjean, D., & Scherer, K. R. (2008). Emotions evoked by the sound of music: Characterization, classification, and measurement. Emotion, 8(4), 494–521. https://doi.org/10.1037/1528-3542.8.4.494