
Multimodal Interaction Grammar Analysis Based on Two-Stage User-Based Elicitation in 3D Modeling

Pages 2120–2141 | Received 30 Nov 2022, Accepted 25 Sep 2023, Published online: 16 Oct 2023

