Event-based Multitrack Alignment using a Probabilistic Framework

Pages 71-82 | Received 12 Dec 2012, Accepted 20 Dec 2014, Published online: 14 Sep 2015
