Research Article

The aesthetics of deconstruction: neural synthesis of transformation matrices using GANs on multichannel polyphonic MIDI data
Pages 245-262 | Received 13 Feb 2023, Accepted 24 Jan 2024, Published online: 16 Feb 2024

