Original Articles

Context-Based Method Using Bayesian Network in Multimodal Fission System

Pages 1076-1090 | Received 04 May 2015, Accepted 27 Sep 2015, Published online: 13 Nov 2015

References

  • Thalmann, N. M., D. Thalmann and Z. Yumak, 2014, “Multimodal human-machine interaction including virtual humans or social robots”, in SIGGRAPH Asia 2014 Courses, p. 14, ACM.
  • Jacko, J. A. (Ed.), 2012, Human Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, CRC Press.
  • Zaguia, Atef, Manolo Dulva Hina, Chakib Tadj and Amar Ramdane-Cherif, 2010b, “Using Multimodal Fusion in Accessing Web Services”, Journal of Emerging Trends in Computing and Information Sciences, vol. 1, no 2, p. 121–138.
  • Portillo, Pilar Manchón, Guillermo Pérez García and Gabriel Amores Carredano, 2006, “Multimodal fusion: a new hybrid strategy for dialogue systems”, in Proceedings of the 8th international conference on Multimodal interfaces (Banff, Alberta, Canada), p. 357–363.
  • Zaguia, Atef, Ahmad Wahbi, Chakib Tadj and Amar Ramdane-Cherif, 2013b, “Multimodal Fission For Interaction Architecture”, Journal of Emerging Trends in Computing and Information Sciences, vol. 4, no 1.
  • Zaguia, Atef, Ahmad Wahbi, Moeiz Miraoui, Chakib Tadj and Amar Ramdane-Cherif, 2013a, “Modeling Rules Fission and Modality Selection Using Ontology”, Journal of Software Engineering and Applications, vol. 7, no 6, p. 354–371.
  • Costa, David, and Carlos Duarte, 2011, “Adapting Multimodal Fission to User's Abilities”, in Universal Access in Human-Computer Interaction, Design for All and eInclusion, edited by Constantine Stephanidis, vol. 6765, p. 347–356, Coll. “Lecture Notes in Computer Science”, Springer Berlin Heidelberg. <http://dx.doi.org/10.1007/978-3-642-21672-5_38>.
  • Bolt, R., 1980, “‘Put-that-there’: Voice and gesture at the graphics interface”, ACM SIGGRAPH Computer Graphics, vol. 14, no 3, p. 262–270.
  • Jacob, Mithun George, Yu-Ting Li and Juan P. Wachs, 2012, “Gestonurse: a multimodal robotic scrub nurse”, in Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction, p. 153–154, ACM.
  • Nordahl, Rolf, Stefania Serafin, Luca Turchet and Niels Christian Nilsson, 2012, “A multimodal architecture for simulating natural interactive walking in virtual environments”, PsychNology, vol. 9, no 3, p. 245–268.
  • Oviatt, S., P. Cohen, Lizhong Wu, J. Vergo, L. Duncan, B. Suhm, J. Bers, T. Holzman, T. Winograd, J. Landay, J. Larson and D. Ferro, 2000, “Designing the user interface for multimodal speech and pen-based gesture applications: state-of-the-art systems and future research directions”, Human-Computer Interaction, vol. 15, no 4, p. 263–322.
  • Meng, H., S. Oviatt, G. Potamianos and G. Rigoll, 2009, “Introduction to the Special Issue on Multimodal Processing in Speech-Based Interactions”, IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, no 3, p. 409–410.
  • Poller, Peter, and Valentin Tschernomas, 2006, “Multimodal Fission and Media Design”, in SmartKom: Foundations of Multimodal Dialogue Systems, edited by Wolfgang Wahlster, Springer Berlin Heidelberg.
  • Atrey, Pradeep K., M. Anwar Hossain, Abdulmotaleb El Saddik and Mohan S. Kankanhalli, 2010, “Multimodal fusion for multimedia analysis: a survey”, Multimedia Systems, vol. 16, no 6, p. 345–379.
  • Nguyen, Laurent, Jean-Marc Odobez and Daniel Gatica-Perez, 2012, “Using self-context for multimodal detection of head nods in face-to-face interactions”, in Proceedings of the 14th ACM international conference on Multimodal interaction (Santa Monica, California, USA), p. 289–292, ACM.
  • Perroud, Didier, Leonardo Angelini, Omar Abou Khaled and Elena Mugellini, 2012, “Context-Based Generation of Multimodal Feedbacks for Natural Interaction in Smart Environments”, in AMBIENT 2012, The Second International Conference on Ambient Computing, Applications, Services and Technologies, p. 19–25.
  • Benoit, Alexandre, Laurent Bonnaud, Alice Caplier, Phillipe Ngo, Lionel Lawson, Daniela G. Trevisan, Vjekoslav Levacic, Céline Mancas and Guillaume Chanel, 2009, “Multimodal focus attention and stress detection and feedback in an augmented driver simulator”, Personal and Ubiquitous Computing, vol. 13, no 1.
  • Palanque, Philippe, and Amélie Schyn, 2003, “A Model-Based Approach for Engineering Multimodal Interactive Systems”, in 9th IFIP TC13 Int. Conf. on Human-Computer Interaction, IOS Press.
  • Oviatt, S. (Incaa Designs, Seattle), 2007, “Implicit user-adaptive system engagement in speech, pen and multimodal interfaces”, in 2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU) (Kyoto), p. 496–501, IEEE.
  • Henry, Tyson R., Scott E. Hudson and Gary L. Newell, 1990, “Integrating gesture and snapping into a user interface toolkit”, in Proceedings of the 3rd annual ACM SIGGRAPH symposium on User interface software and technology (Snowbird, Utah, USA), p. 112–122, ACM.
  • Zaguia, Atef, Manolo Dulva Hina, Chakib Tadj and Amar Ramdane-Cherif, 2010a, “Interaction context-aware modalities and multimodal fusion for accessing web services”, Ubiquitous Computing and Communication Journal, vol. 5, no 4.
  • Alexander, Christopher, S. Ishikawa and M. Silverstein, 1977, A Pattern Language, Center for Environmental Structure Series, vol. 2.
  • Zhang, Lei, and Qiang Ji, 2011, “A Bayesian network model for automatic and interactive image segmentation”, IEEE Transactions on Image Processing, vol. 20, no 9, p. 2582–2593.
  • Weber, P., G. Medina-Oliva, C. Simon and B. Iung, 2012, “Overview on Bayesian networks applications for dependability, risk analysis and maintenance areas”, Engineering Applications of Artificial Intelligence, vol. 25, no 4, p. 671–682.
  • Zaguia, A., C. Tadj and A. Ramdane-Cherif, 2015, “Prototyping using a Pattern Technique and a Context-Based Bayesian Network in Multimodal Systems”, International Journal on Smart Sensing & Intelligent Systems, vol. 8, no 3.
  • Friedman, Nir, Iftach Nachman and Dana Pe’er, 1999, “Learning Bayesian network structure from massive datasets: the ‘sparse candidate’ algorithm”, in Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence, p. 206–215, Morgan Kaufmann Publishers Inc.
  • Jensen, Kurt, 1987, “Coloured Petri nets”, in Petri Nets: Central Models and Their Properties, edited by W. Brauer, W. Reisig and G. Rozenberg, vol. 254, p. 248–299, Coll. “Lecture Notes in Computer Science”, Springer Berlin/Heidelberg. <http://dx.doi.org/10.1007/BFb0046842>.
  • CPN-Tools, 2012, “CPN Tools”, <http://cpntools.org/>.
