
Context-Adaptive Visual Cues for Safe Navigation in Augmented Reality Using Machine Learning

Pages 761–781 | Received 10 May 2022, Accepted 31 Aug 2022, Published online: 22 Sep 2022
