Abstract
This article introduces Multisensory Integrated Expressive Environments (MIEEs) as a framework for distributed active mixed reality environments for the performing arts, and discusses the structure of MIEEs and their global properties. Extended Multimodal Environments (EMEs) are introduced as the basic components and described in detail, mainly with reference to what they contain: real and virtual objects, and real and virtual subjects. EMEs are then connected into a network of spaces enabling geographically distributed performances. Finally, the concept of "active EMEs" is employed to introduce MIEEs as a hierarchical structure of metaspaces, conceived as virtual subjects collaborating to achieve the overall narrative or aesthetic goal of the performance. Some examples of EMEs and MIEEs are also discussed.
Acknowledgments
I wish to thank the scientific director of the DIST-InfoMus Laboratory, Professor Antonio Camurri, and my colleagues Paolo Coletta, Alberto Massari, Barbara Mazzarino, Massimiliano Peri, Matteo Ricchetti, Andrea Ricci and Riccardo Trocca. I also thank colleagues from the partner institutions who worked in the EU-IST Project MEGA, which partially supported this research. Some more recent developments of this research have been partially supported by the EU-IST Project TAI-CHI (Tangible Acoustic Interfaces in Computer Human Interaction).
Notes
1. Notice that at this point the metaspace will usually be considered as a virtual environment and a virtual subject, since it will not have an objective existence. Its "perceptions" and "actions" with respect to subjects/EMEs will not be physical (such as, e.g., the generation of audio/visual content in EMEs). Rather, metaspaces will act as software agents interacting with other software agents (the subjects/EMEs). However, it is sometimes also possible to find a physical counterpart of a metaspace (this will be discussed in an example later in this article).