Abstract
This paper proposes to use the techniques of Concatenative Sound Synthesis in the context of real-time Music Interaction. We describe a system that generates an audio track by concatenating audio segments extracted from pre-existing musical files. The track can be controlled in real-time by specifying high-level properties (or constraints) on metadata describing the audio segments. A constraint-satisfaction mechanism, based on local search, selects the audio segments that best match those constraints at any time. We describe the real-time aspects of the system, notably the asynchronous addition and removal of constraints, and report on several constraints and controllers designed for the system. We illustrate the system with several application examples, notably a virtual drummer able to interact with a human musician in real-time.
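To make the selection mechanism concrete, the following is a minimal sketch, not the paper's implementation: constraints are modelled as cost functions over segment metadata, and a simple stochastic local search picks a low-cost segment. All feature names (`energy`, `percussivity`) and functions here are illustrative assumptions.

```python
import random

# Hypothetical metadata for audio segments; in the real system this would
# be extracted from pre-existing musical files.
segments = [
    {"id": i, "energy": random.random(), "percussivity": random.random()}
    for i in range(100)
]

# A constraint maps a segment's metadata to a cost (0 = fully satisfied).
def energy_close_to(target):
    return lambda seg: abs(seg["energy"] - target)

def percussive():
    return lambda seg: 1.0 - seg["percussivity"]

def total_cost(seg, constraints):
    # Constraints are combined additively; other aggregations are possible.
    return sum(c(seg) for c in constraints)

def local_search(segments, constraints, iterations=200):
    """Hill-climb over randomly drawn candidates, keeping the cheaper one.
    A stand-in for the adaptive-search mechanism described in the paper."""
    current = random.choice(segments)
    for _ in range(iterations):
        candidate = random.choice(segments)
        if total_cost(candidate, constraints) < total_cost(current, constraints):
            current = candidate
    return current

# Select a segment that is both high-energy and percussive.
active_constraints = [energy_close_to(0.8), percussive()]
best = local_search(segments, active_constraints)
```

Because the search only ever keeps the cheaper candidate, constraints can be added or removed between calls without restarting anything, which mirrors the asynchronous control the abstract describes.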
Acknowledgements
This work gratefully inherits ideas, bits and pieces about adaptive search from Philippe Codognet, musaicing from Aymeric Zils & Anthony Beurivé and Object-Oriented CSP from Pierre Roy. It uses JSyn, a Java synthesis library (Burk, 1998), and MIDIShare, a real-time multi-task MIDI operating system developed by GRAME (Orlarey & Lequay, 1989). It has been partially funded by the SemanticHifi European IST project.
Notes
1. “An automatic Ringo Starr.”
2. We define here “autonomy” not as an intrinsic self-motivation of the system—which lacks any kind of emerging behaviour—but as its ability to preserve a predefined set of global constraints.