Original Articles

Inter- Versus Intra-singer Similarity and Variation in Vocal Performances

Pages 252-264 | Received 15 Oct 2015, Accepted 18 Jun 2016, Published online: 14 Jul 2016

References

  • Bernays, M., & Traube, C. (2014). Investigating pianists’ individuality in the performance of five timbral nuances through patterns of articulation, touch, dynamics, and pedaling. Frontiers in Psychology, 5, 157. doi:10.3389/fpsyg.2014.00157
  • Chaffin, R., Lemieux, A., & Chen, C. (2007). “It is different each time I play”: Variability in highly prepared musical performance. Music Perception, 24(5), 455–472.
  • Cook, N. (2007). Performance analysis and Chopin’s mazurkas. Musicae Scientiae, 11(2), 183–207.
  • de Cheveigné, A. (2002). YIN MATLAB implementation. Retrieved from http://audition.ens.fr/adc/sw/yin.zip
  • de Cheveigné, A., & Kawahara, H. (2002). YIN, a fundamental frequency estimator for speech and music. The Journal of the Acoustical Society of America, 111(4), 1917–1930.
  • Deshmukh, S.H., & Bhirud, S. (2014). North Indian classical music’s singer identification by timbre recognition using MIR Toolbox. International Journal of Computer Applications, 91(4), 1–5. doi:10.5120/15866-4804
  • Devaney, J. (2015a). Evaluating singer consistency and uniqueness in vocal performances. In T. Collins, D. Meredith, & A. Volk (Eds.), Mathematics and computation in music (pp. 173–178). New York: Springer International Publishing.
  • Devaney, J. (2015b). Recapturing the data in Seashore’s musical performance measurements. Musicae Scientiae, 19(2), 214–222.
  • Devaney, J., Mandel, M., Ellis, D.P.W., & Fujinaga, I. (2011a). Characterizing singing voice fundamental frequency trajectories. In Proceedings of the Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) (pp. 73–76). Piscataway, NJ: IEEE.
  • Devaney, J., Mandel, M., Ellis, D.P.W., & Fujinaga, I. (2011b). Automatically extracting performance data from recordings of trained singers. Psychomusicology: Music, Mind and Brain, 21(1–2), 108–136.
  • Devaney, J., Mandel, M.I., & Ellis, D.P.W. (2009). Improving MIDI-audio alignment with acoustic features. In Proceedings of the Workshop on Applications of Signal Processing to Acoustics and Audio (pp. 45–48). Piscataway, NJ: IEEE.
  • Dodson, A. (2011). Expressive timing in expanded phrases: an empirical study of recordings of three Chopin preludes. Music Performance Research, 4, 2–29.
  • Ellis, D.P.W. (2012). PLP and RASTA (and MFCC, and inversion) in Matlab. Retrieved from http://labrosa.ee.columbia.edu/matlab/rastamat/
  • Fan, R.-E., Chang, K.-W., Hsieh, C.-J., Wang, X.-R., & Lin, C.-J. (2008). LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9, 1871–1874.
  • Flossmann, S., Grachten, M., & Widmer, G. (2008). Experimentally investigating the use of score features for computational models of expressive timing. In Proceedings of the 10th International Conference on Music Perception and Cognition (ICMPC 2008). Sapporo, Japan.
  • Flossmann, S., Grachten, M., & Widmer, G. (2009). Expressive performance rendering: Introducing performance context. In Proceedings of the Sound and Music Computing Conference (pp. 155–160). http://smcnetwork.org/node/1208
  • Genesis Acoustics Loudness Toolbox for Matlab. (2010). Retrieved from http://www.genesis-acoustics.com/
  • Gingras, B. (2014). Individuality in music performance: introduction to the research topic. Frontiers in Psychology, 5, 661. doi:10.3389/fpsyg.2014.00661
  • Gingras, B., Asselin, P.-Y., & McAdams, S. (2014). Individuality in harpsichord performance: Disentangling performer- and piece-specific influences on interpretive choices. Frontiers in Psychology, 4, 895. doi:10.3389/fpsyg.2013.00895
  • Gingras, B., Lagrandeur-Ponce, T., Giordano, B.L., & McAdams, S. (2011). Perceiving musical individuality: Performer identification is dependent on performer expertise and expressiveness, but not on listener expertise. Perception, 40, 1206–1220.
  • Glasberg, B.R., & Moore, B.C.J. (2002). A model of loudness applicable to time-varying sounds. Journal of the Audio Engineering Society, 50(5), 331–342.
  • Gockel, H., Moore, B.C.J., & Carlyon, R.P. (2001). Influence of rate of change of frequency on the overall pitch of frequency-modulated tones. Journal of the Acoustical Society of America, 109(2), 701–712.
  • Goebl, W., Pampalk, E., & Widmer, G. (2004). Exploring expressive performance trajectories: Six famous pianists play six Chopin pieces. In Proceedings of the 8th International Conference on Music Perception and Cognition (pp. 505–509). Sydney, Australia: Causal Productions.
  • Grachten, M., & Widmer, G. (2012). Linear basis models for prediction and analysis of musical expression. Journal of New Music Research, 41(4), 311–322.
  • Huron, D. (2006). Sweet anticipation: Music and the psychology of expectation. Cambridge, MA: MIT Press.
  • Koren, R., & Gingras, B. (2014). Perceiving individuality in harpsichord performance. Frontiers in Psychology, 5(141), 1–13.
  • Kroher, N., & Gómez, E. (2014, September). Automatic singer identification for improvisational styles based on vibrato, timbre and statistical performance descriptors. Paper presented at the joint International Computer Music Conference/Sound and Music Computing Conference, Athens, Greece, September 14–20. Ann Arbor, MI: University of Michigan Library.
  • Lagrange, M., Ozerov, A., & Vincent, E. (2012). Robust singer identification in polyphonic music using melody enhancement and uncertainty-based learning. Proceedings of the 13th International Society for Music Information Retrieval Conference (ISMIR) (6pp). Canada: International Society for Music Information Retrieval.
  • Lerdahl, F., & Jackendoff, R. (1983). A generative theory of tonal music. Cambridge, MA: MIT Press.
  • Li, P.-C., Su, L., Yang, Y.-H., & Su, A.W. (2015). Analysis of expressive musical terms in violin using score-informed and expression-based audio features. In Proceedings of the International Society for Music Information Retrieval Conference (pp. 809–815). Canada: International Society for Music Information Retrieval.
  • Livingstone, S.R., Choi, D.H., & Russo, F.A. (2014). The influence of vocal training and acting experience on measures of voice quality and emotional genuineness. Frontiers in Psychology, 5, 156. doi:10.3389/fpsyg.2014.00156.
  • Loehr, J.D., & Palmer, C. (2009). Sequential and biomechanical factors constrain timing and motion in tapping. Journal of Motor Behavior, 41(2), 128–136.
  • Loni, D.Y., & Subbaraman, S. (2013). Extracting acoustic features of singing voice for various applications related to MIR: A review. Proceedings of International Conference on Advances in Signal Processing and Communication 2013 (pp. 66–71, doi: 03.LSCS.2013.3). Washington, DC: ACEEE.
  • Molina-Solana, M., Arcos, J.L., & Gomez, E. (2008). Using expressive trends for identifying violin performers. In Proceedings of the International Society for Music Information Retrieval Conference (pp. 495–500). Canada: International Society for Music Information Retrieval.
  • Neiberg, D., Laukka, P., & Elfenbein, H.A. (2011). Intra-, inter-, and cross-cultural classification of vocal affect. In Proceedings of the INTERSPEECH (pp. 1581–1584). Baixas, France: International Speech Communication Association.
  • Peeters, G., Giordano, B.L., Susini, P., Misdariis, N., & McAdams, S. (2011). The timbre toolbox: Extracting audio descriptors from musical signals. The Journal of the Acoustical Society of America, 130(5), 2902–2916.
  • Ramirez, R., Maestre, E., Perez, A., & Serra, X. (2011). Automatic performer identification in celtic violin audio recordings. Journal of New Music Research, 40(2), 165–174.
  • Ramirez, R., Maestre, E., Pertusa, A., Gómez, E., & Serra, X. (2007). Performance-based interpreter identification in saxophone audio recordings. IEEE Transactions on Circuits and Systems for Video Technology, 17(3), 356–364.
  • Regnier, L., & Peeters, G. (2012). Singer verification: Singer model vs. song model. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 437–440). Piscataway, NJ: IEEE.
  • Repp, B. (1990). Patterns of expressive timing in performances of a Beethoven minuet by nineteen famous pianists. Journal of the Acoustical Society of America, 88(2), 622–641.
  • Repp, B. (1992). Diversity and commonality in music performance: An analysis of timing microstructure in Schumann’s “Träumerei”. Journal of the Acoustical Society of America, 92(5), 2546–2568.
  • Repp, B. (1997). Book review: Melodic intonation: Psychoacoustics, and the violin by Janina Fyk. Music Perception, 14(4), 477–486.
  • Repp, B.H. (1996). The dynamics of expressive piano performance: Schumann’s “Träumerei” revisited. The Journal of the Acoustical Society of America, 100(1), 641–650.
  • Sapp, C. (2008). Hybrid numeric/rank similarity metrics for musical performance analysis. In Proceedings of the International Society for Music Information Retrieval Conference (pp. 501–506). Canada: International Society for Music Information Retrieval.
  • Seashore, C. (1938). Psychology of music. Iowa City, IA: University of Iowa Press (Original edition, New York: Dover Publications).
  • Spiro, N., Gold, N., & Rink, J. (2010). The form of performance: analyzing pattern distribution in select recordings of Chopin’s Mazurka Op. 24 No. 2. Musicae Scientiae, 14(2), 23–55.
  • Tobudic, A., & Widmer, G. (2003). Playing Mozart phrase by phrase. In K.D. Ashley & D.G. Bridge (Eds.), Case-based reasoning research and development (Lecture Notes in Computer Science, Vol. 2689, pp. 552–566). Berlin, Heidelberg: Springer.
  • Todd, N. (1985). A model of expressive timing in tonal music. Music Perception, 3(1), 33–58.
  • Todd, N. (1992). The dynamics of dynamics: A model of musical expression. Journal of the Acoustical Society of America, 91(6), 3540–3550.
  • Van Vugt, F.T., Jabusch, H.-C., & Altenmüller, E. (2013). Individuality that is unheard of: Systematic temporal deviations in scale playing leave an inaudible pianistic fingerprint. Frontiers in Psychology, 4, 134. doi:10.3389/fpsyg.2013.00134
  • Widmer, G. (2002). Machine discoveries: A few simple, robust local expression principles. Journal of New Music Research, 31, 37–50.
  • Widmer, G. (2003). Discovering simple rules in complex data: A meta-learning algorithm and some surprising musical discoveries. Artificial Intelligence, 146(2), 129–148.
  • Widmer, G., Dixon, S. E., Goebl, W., Pampalk, E., & Tobudic, A. (2003). In search of the Horowitz factor. AI Magazine, 24, 111–130.
  • Widmer, G., & Goebl, W. (2004). Computational models of expressive music performance: The state of the art. Journal of New Music Research, 33(3), 206–216.
  • Widmer, G., & Tobudic, A. (2003). Playing Mozart by analogy: Learning multi-level timing and dynamics strategies. Journal of New Music Research, 32, 259–268.
  • Windsor, W. L., & Clarke, E. F. (1997). Expressive timing and dynamics in real and artificial musical performances: Using an algorithm as an analytical tool. Music Perception, 15, 127–152.
  • Wöllner, C. (2013). How to quantify individuality in music performance? Studying artistic expression with averaging procedures. Frontiers in Psychology, 4, 361. doi:10.3389/fpsyg.2013.00361
  • Zhang, T. (2003). Automatic singer identification. In Proceedings of the 2003 International Conference on Multimedia and Expo, ICME’03 (I-33-6, Vol. 1). Piscataway, NJ: IEEE.
