Abstract
In this article, we discuss physical gestures from the perspective of computer-aided composition (CAC) and propose solutions for integrating gesture signals as musical material into these environments. After presenting background and motivations, we review related work and describe design guidelines for our developments. A particular focus is given to the idiosyncrasies of working with gestures in real-time vs. deferred-time paradigms. We present an implementation of our concepts as the library OM-Geste, integrated into the CAC environment OpenMusic. Two case studies are presented: the generation of a symbolic musical score and spatial audio synthesis from gesture recordings of dance and instrumental performances.
Acknowledgements
We would like to express our thanks to Kristian Nymoen and Sophie Breton for sharing data from their performances. Thanks to Sean Ferguson for insightful discussions and to Isabelle van Grimde for permission to use excerpts of her choreography. The first author would like to thank Jean Bresson for support with the design and implementation of the library, and David Hofmann for proofreading the article.
Notes
1 Indeed, some researchers consider the term ‘gesture’ misleading and avoid its use altogether (Jensenius, 2007), a notion that, if applied to other research terms, would likely reduce our vocabulary dramatically.
2 The term ‘morphology’ is used to denote a characteristic structure or form of multiple signals, independent of the types of signals themselves: http://www.merriam-webster.com/dictionary/morphology.
3 UPIC is an acronym for ‘Unité Polyagogique Informatique du CEMAMu’.
4 A list of descriptors can be found online: http://xdif.wiki.ifi.uio.no/Data_types.
5 See also Schumacher (2016) for a similar discussion on representations of audio signals.
6 Indeed, there are many historical examples of mapping in composition, spanning from medieval to contemporary music practices (Doornbusch, 2010).
7 This hierarchical structure is similar to the ‘internal representation’ of Ramstein and Cadoz, in which units contain channels, which in turn contain lanes (Cadoz & Ramstein, 1990).
8 Troughs in acceleration data have been found to be indicative of demarcation points between gestures (Bevilacqua et al., 2002).
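A trough here is a local minimum of the signal. As a minimal illustrative sketch (a naive neighbour comparison in Python; the cited work's actual detection method is not reproduced here), candidate boundary points can be located as follows:

    def find_troughs(accel):
        # indices where a sample is strictly smaller than both neighbours
        return [i for i in range(1, len(accel) - 1)
                if accel[i] < accel[i - 1] and accel[i] < accel[i + 1]]

    # e.g. find_troughs([3, 1, 2, 5, 0, 4]) -> [1, 4]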
9 OM’s default quantisation algorithm groups individual notes falling into a time window of 100 ms into chords.
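For illustration, a minimal sketch of such a windowed grouping (assuming the window is anchored at the first note of each chord; this is not OM's actual implementation):

    def group_into_chords(notes, window_ms=100):
        # notes: list of (onset_ms, pitch) pairs; a new chord starts once an
        # onset falls 100 ms or more after the first note of the current chord
        chords, current = [], []
        for onset, pitch in sorted(notes):
            if current and onset - current[0][0] >= window_ms:
                chords.append(current)
                current = []
            current.append((onset, pitch))
        if current:
            chords.append(current)
        return chords

    # e.g. onsets at 0, 40 and 90 ms merge into one chord; 250 ms opens a new one:
    # group_into_chords([(0, 60), (40, 64), (90, 67), (250, 72)])
    # -> [[(0, 60), (40, 64), (90, 67)], [(250, 72)]]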
11 In the original mapping for the recording, audio amplitude correlates with the absolute velocity of the SoundSaber’s tip.
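For illustration, a minimal sketch of such a velocity-to-amplitude mapping (hypothetical helper names and a simple linear gain; the original mapping is not reproduced here):

    import math

    def tip_speed(p_prev, p_curr, dt):
        # absolute velocity: Euclidean distance between successive 3-D tip
        # positions, divided by the sampling interval dt (in seconds)
        return math.dist(p_prev, p_curr) / dt

    def amplitude(speed, gain=0.01, max_amp=1.0):
        # linear mapping of speed to amplitude, clipped to [0, max_amp]
        return min(max_amp, gain * speed)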