Abstract
We present the formal model and implementation of a computer-aided composition system that enables the ‘composition of musical processes’. Rather than generating static data, this framework treats musical objects as dynamic structures that may be updated and modified at any time. After formalizing a number of basic concepts, the paper describes the architecture of a framework comprising a scheduler, programming tools and graphical interfaces. The operation of this architecture, which supports both traditional and dynamic-process composition, is illustrated through concrete musical examples.
Acknowledgements
We thank Jean-Louis Giavitto for his numerous suggestions on both formal and technical aspects. We also thank Hélianthe Caure, Julia Blondeau and Jérémie Garcia for their support.
Notes
Dimitri Bouche, UMR 9912 STMS: IRCAM - CNRS - UPMC, Paris, France.
1 In computer science, this phase is often divided into the sub-phases of planning and scheduling, where planning consists of finding an optimal sequence of operations among the many possible ones. However, there is often only one possible way to render a given set of musical data. We therefore consider that planning is implicitly performed during the computation of musical structures.
2 It also implicitly includes the generation of new objects, since generation can be modelled as the transformation of the null object.
3 This progression is bounded by the object duration: .
4 In our current implementation, ms and .
5 Videos can be viewed at http://repmus.ircam.fr/efficace/wp/musical-processes.
6 Here, ‘musical output’ may not only refer to sounds or control messages, but also to the score display.