
Audio features dedicated to the detection and tracking of arousal and valence in musical compositions

Pages 322-333 | Received 23 Oct 2017, Accepted 09 Apr 2018, Published online: 27 Apr 2018

References

  • Aljanaki, A., Yang, Y.-H., & Soleymani, M. (2016). Emotion in music task: Lessons learned. Working Notes Proceedings of the MediaEval 2016 Workshop. Hilversum, Netherlands.
  • Baume, C., Fazekas, G., Barthet, M., Marston, D., & Sandler, M. (2014). Selection of audio features for music emotion recognition using production music. Audio Engineering Society Conference: 53rd International Conference: Semantic Audio.
  • Bogdanov, D., Wack, N., Gómez, E., Gulati, S., Herrera, P., Mayor, O., & Serra, X. (2013). ESSENTIA: An audio analysis library for music information retrieval. Proceedings of the 14th International Society for Music Information Retrieval Conference (pp. 493–498). Curitiba, Brazil.
  • Deng, J. J., & Leung, C. H. C. (2015). Dynamic time warping for music retrieval using time series modeling of musical emotions. IEEE Transactions on Affective Computing, 6(2), 137–151. doi: 10.1109/TAFFC.2015.2404352
  • Doraisamy, S., Golzari, S., Norowi, N. M., Sulaiman, M. N., & Udzir, N. I. (2008). A study on feature selection and classification techniques for automatic genre classification of traditional Malay music. ISMIR'08, 9th International Conference on Music Information Retrieval (pp. 331–336). Philadelphia, PA: Drexel University.
  • Grekow, J. (2012). Mood tracking of musical compositions. In L. Chen, A. Felfernig, J. Liu, & Z. W. Raś (Eds.), Proceedings of the 20th International Conference on Foundations of Intelligent Systems (pp. 228–233). Berlin/Heidelberg: Springer.
  • Grekow, J. (2015). Audio features dedicated to the detection of four basic emotions. In K. Saeed & W. Homenda (Eds.), Computer Information Systems and Industrial Management: 14th IFIP TC8 International Conference, CISIM 2015, Proceedings (pp. 583–591). Cham: Springer International Publishing.
  • Grekow, J. (2016). Music emotion maps in arousal–valence space. In K. Saeed & W. Homenda (Eds.), Computer Information Systems and Industrial Management: 15th IFIP TC8 International Conference, CISIM 2016, Proceedings (pp. 697–706). Cham: Springer International Publishing.
  • Grekow, J. (2017a). Audio features dedicated to the detection of arousal and valence in music recordings. In 2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA) (pp. 40–44). Gdynia, Poland: IEEE.
  • Grekow, J. (2017b). Comparative analysis of musical performances by using emotion tracking. In M. Kryszkiewicz, A. Appice, D. Slezak, H. Rybinski, A. Skowron, & Z. W. Raś (Eds.), Foundations of Intelligent Systems: 23rd International Symposium, ISMIS 2017, Proceedings (pp. 175–184). Cham: Springer International Publishing.
  • Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., & Witten, I. H. (2009). The WEKA data mining software: An update. ACM SIGKDD Explorations Newsletter, 11(1), 10–18. doi: 10.1145/1656274.1656278
  • Kim, Y. E., Schmidt, E. M., Migneco, R., Morton, B. G., Richardson, P., Scott, J. J., & Turnbull, D. (2010). Music emotion recognition: A state of the art review. Proceedings of the 11th International Society for Music Information Retrieval Conference, ISMIR'10 (pp. 255–266). Utrecht, Netherlands.
  • Kohavi, R., & John, G. H. (1997). Wrappers for feature subset selection. Artificial Intelligence, 97(1–2), 273–324. doi: 10.1016/S0004-3702(97)00043-X
  • Li, T., & Ogihara, M. (2003). Detecting emotion in music. ISMIR'03, 4th International Conference on Music Information Retrieval, Baltimore, MD, October 27–30, 2003.
  • Lin, Y., Chen, X., & Yang, D. (2013). Exploration of music emotion recognition based on midi. Proceedings of the 14th International Society for Music Information Retrieval Conference (pp. 221–226). Curitiba, Brazil.
  • Lu, L., Liu, D., & Zhang, H.-J. (2006). Automatic mood detection and tracking of music audio signals. IEEE Transactions on Audio, Speech and Language Processing, 14(1), 5–18. doi: 10.1109/TSA.2005.860344
  • Panda, R., Rocha, B., & Paiva, R. P. (2015). Music emotion recognition with standard and melodic audio features. Applied Artificial Intelligence, 29(4), 313–334. doi: 10.1080/08839514.2015.1016389
  • Quinlan, J. R. (1992). Learning with continuous classes. Proceedings of the 5th Australian Joint Conference on Artificial Intelligence (pp. 343–348). Singapore: World Scientific.
  • Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178. doi: 10.1037/h0077714
  • Saari, P., Eerola, T., & Lartillot, O. (2011). Generalizability and simplicity as criteria in feature selection: Application to mood classification in music. IEEE Transactions on Audio, Speech, and Language Processing, 19(6), 1802–1812. doi: 10.1109/TASL.2010.2101596
  • Schmidt, E. M., Turnbull, D., & Kim, Y. E. (2010). Feature selection for content-based, time-varying musical emotion regression. Proceedings of the International Conference on Multimedia Information Retrieval (pp. 267–274). New York, NY: ACM.
  • Smola, A. J., & Schölkopf, B. (2004). A tutorial on support vector regression. Statistics and Computing, 14(3), 199–222. doi: 10.1023/B:STCO.0000035301.49549.88
  • Song, Y., Dixon, S., & Pearce, M. (2012). Evaluation of musical features for emotion classification. Proceedings of the 13th International Society for Music Information Retrieval Conference, ISMIR'12 (pp. 523–528). Porto, Portugal.
  • Tzanetakis, G., & Cook, P. (2000). Marsyas: A framework for audio analysis. Organised Sound, 4(3), 169–175. doi: 10.1017/S1355771800003071
  • Wang, Y., & Witten, I. H. (1997). Induction of model trees for predicting continuous classes. Poster papers of the 9th European Conference on Machine Learning. Springer.
  • Wieczorkowska, A., Synak, P., Lewis, R., & Raś, Z. W. (2005). Extracting emotions from music data. In M.-S. Hacid, N. V. Murray, Z. W. Raś, & S. Tsumoto (Eds.), Foundations of Intelligent Systems: 15th International Symposium, ISMIS'05, Proceedings, Saratoga Springs, NY, May 25–28, 2005 (pp. 456–465). Berlin/Heidelberg: Springer.
  • Xu, L., Yan, P., & Chang, T. (1988). Best first strategy for feature selection. Proceedings of the 9th International Conference on Pattern Recognition (Vol. 2, pp. 706–708), Rome, Italy.
  • Yang, Y.-H., & Chen, H. H. (2012). Machine recognition of music emotion: A review. ACM Transactions on Intelligent Systems and Technology, 3(3), 40:1–40:30. doi: 10.1145/2168752.2168754
  • Yang, Y.-H., Lin, Y.-C., Su, Y.-F., & Chen, H. H. (2008). A regression approach to music emotion recognition. IEEE Transactions on Audio, Speech, and Language Processing, 16(2), 448–457. doi: 10.1109/TASL.2007.911513